
LambdaSpeak Speech Synthesizer, Sample Player, RTC, MP3, Serial Interface, MIDI

Started by LambdaMikel, 08:56, 01 May 17



LambdaMikel

Quote from: pelrun on 04:46, 10 February 18
XMega is obsolete these days, and is full of hardware bugs - so it's probably better to consider another architecture if you decide AVR isn't sufficient.

... you mean more obsolete than Z80?  ;) I thought XMega was introduced in 2010... I think you mean ATMega?

LambdaMikel

#126
Quote from: pelrun on 04:46, 10 February 18
How easy it is to port to a new microcontroller depends mostly on how you structured your code.

I think most of the work is in configuring all the hardware registers. Sure, you can hide and structure all this behind macros and function calls, e.g. "configureSPI" (and I did that), but still, the work is IN the functions, not in structuring them into functions.

I will need many hours to figure out how to configure SPI on an XMega, or how to set up an overflow timer interrupt, etc. These hardware register details are very specific to a certain MCU (even within the ATMega family).
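Just to give an idea of the kind of MCU-specific register fiddling I mean, here is a minimal sketch of what a "configureSPI" helper has to hide on the ATmega (assuming an ATmega644 and the avr-libc register names - the XMega does the same job with completely different registers):

#include <avr/io.h>

/* SPI master setup: /SS, MOSI, SCK are PB4/PB5/PB7 on the ATmega644 */
static void spi_master_init(void)
{
  DDRB |= (1 << PB4) | (1 << PB5) | (1 << PB7);   /* pins as outputs           */
  SPCR  = (1 << SPE) | (1 << MSTR) | (1 << SPR0); /* enable, master, F_CPU/16  */
}

static uint8_t spi_transfer(uint8_t data)
{
  SPDR = data;                     /* start the transfer        */
  while (!(SPSR & (1 << SPIF)));   /* busy-wait until finished  */
  return SPDR;                     /* received byte             */
}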

Even if in principle all these features will also be available on the XMega, configuring the hardware registers correctly will take days for a beginner without experience on that platform. I know how long it took me to learn that for ATMega  :D

So, that seems to be a lot of work only for supporting a feature such as firmware updates via USB (and I am not sure if there will be many firmware updates anyway - I hope we get it mostly right from the beginning, in which case there wouldn't be a need for a lot of updates). Not saying that it is not a great idea; I am sure Bryce is looking into various ideas for realizing it. I am just saying that I don't want to switch to a completely different platform just for this. But I sure like the firmware update idea!

pelrun

Obsolete as in there's zero advantage to choosing an XMega instead of Atmel's ARM chips. They still make them to support existing designs, but you'd be ill-advised to make something new with one. The only reason the ATmegas aren't entirely obsolete is that people still want to make 5V-based designs; moving to XMega requires changing over to 3.3V - and if you're already doing that, it's cheaper and better to switch to ARM at the same time.

And my "structuring" point is that there shouldn't be *any* "configuring hardware registers" in your application code. It shouldn't care how you send a byte to an SPI device, just that it wants to send one. That not only significantly improves the portability of your code, it also makes the code self-documenting. (A line that says "spi_write(data)" tells you more than "SPDR = data; while(!(SPSR & (1<<SPIF) ));", even if that's ultimately what gets done.)

Heck, on STM32 the dev tools *autogenerate* nearly all my hardware driver code (it's not perfect, but it's a good start). I just needed to write a shim that took the parameters passed to my hardware abstraction layer and sent them to the platform API. Stuff writing all that low-level bitbashing from scratch every time.
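(The shim really is nothing more than this kind of thing - a sketch with a hypothetical function name, assuming a CubeMX-generated project with an hspi1 handle:)

#include "stm32f1xx_hal.h"       /* assumption: Cube-generated STM32F1 project */

extern SPI_HandleTypeDef hspi1;  /* handle generated by CubeMX */

/* shim: the HAL-level call forwards straight to the ST platform API */
void spi_write(uint8_t data)
{
  HAL_SPI_Transmit(&hspi1, &data, 1, HAL_MAX_DELAY);
}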

LambdaMikel

#128
Quote from: pelrun on 11:54, 10 February 18
And my "structuring" point is that there shouldn't be *any* "configuring hardware registers" in your application code. It shouldn't care how you send a byte to an SPI device, just that it wants to send one. That not only significantly improves the portability of your code, it also makes the code self-documenting.

Indeed, most of that is in terms of macros and inline functions and #ifdefs in the header files. I did all that. The main point still being - somebody STILL has to write that header file and configuration file :-) Even with Arduino (to a certain extent, they are able to abstract XMega and ATMega into one!), sometimes hardware registers need to be configured... and we all know how nasty C code gets with all that conditional compilation. So, I get all that, but it also adds a level of complexity which can backfire, and "don't care" hardware abstraction on that level is an illusion, really.

Somebody has to care about the hardware details :-) I guess that's also a difference between an application developer and an embedded systems developer. Hey, I am well aware of abstraction, I have been programming mostly in Common Lisp for the last 20 years.  ;D

LambdaMikel

Quote from: LambdaMikel on 17:32, 10 February 18
Indeed, most of that is in terms of macros and inline functions and #ifdefs in the header files. I did all that. The main point still being - somebody STILL has to write that header file and configuration file :-) Even with Arduino (to a certain extent, they are able to abstract XMega and ATMega into one!), sometimes hardware registers need to be configured... and we all know how nasty C code gets with all that conditional compilation. So, I get all that, but it also adds a level of complexity which can backfire, and "don't care" hardware abstraction on that level is an illusion, really.

Somebody has to care about the hardware details :-) I guess that's also a difference between an application developer and an embedded systems developer. Hey, I am well aware of abstraction, I have been programming mostly in Common Lisp for the last 20 years.  ;D

So I totally agree that there should be minimal, if any, hardware details and registers in the application code itself, but that nonetheless still means that I will have a very large amount of work in the hardware abstraction layer (HAL) if I switch to a different MCU. Unfortunately, if you are the person writing both the HAL and the application code yourself, then you are the one who has the work  :P

Sent from my ZTE B2017G using Tapatalk

LambdaMikel



Quote from: pelrun on 11:54, 10 February 18
Heck, on STM32 the dev tools *autogenerate* nearly all my hardware driver code (it's not perfect, but it's a good start). I just needed to write a shim that took the parameters passed to my hardware abstraction layer and sent them to the platform API. Stuff writing all that low-level bitbashing from scratch every time.

But that is one MCU family, right? Here we are talking about hardware abstraction over different families, I think. XMega and ATMega seem to be quite different. So it is more like Arduino, I think, where you want to have common abstractions for ATMega and XMega. But Arduino is also slow... abstraction has its price. For example, many people have noticed that the Arduino pin I/O functions are significantly slower than native ATMega / XMega port access. Of course, that can be avoided if you are the person writing the hardware abstraction layer, but then you are also the person who has the work.

Sent from my ZTE B2017G using Tapatalk

pelrun

(Apologies for the big post. Also, don't come away from this thinking I'm ordering you to do it this way, I'm just trying to impart some of my own experience in case it proves useful.)

There are multiple ways of doing conditional compilation. #ifdefs are messy, as you say, hard to read, difficult to maintain, and easy to get wrong. I much prefer putting the hardware-specific code into one or more completely separate files for each platform and only compiling/linking in the appropriate ones for the current build - that also makes refactoring far easier (because if you find yourself writing direct hardware access code in a file that should be application code only, you automatically know it needs to move, instead of trying to keep track of which set of #ifdefs needs to wrap it.)
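Schematically, the split looks something like this (hypothetical names, each "file" obviously needs its own platform includes, and only one of the two implementations gets compiled per build):

/* hal.h - the only header the application code sees (plus <stdint.h>) */
void hal_spi_write(uint8_t data);

/* hal_avr.c - AVR build only (needs <avr/io.h>) */
void hal_spi_write(uint8_t data)
{
  SPDR = data;
  while (!(SPSR & (1 << SPIF)));
}

/* hal_stm32.c - ARM build only (needs the Cube HAL and the hspi1 handle) */
void hal_spi_write(uint8_t data)
{
  HAL_SPI_Transmit(&hspi1, &data, 1, HAL_MAX_DELAY);
}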

Quote from: LambdaMikel on 18:08, 10 February 18
But that is one MCU family, right? Here we are talking about hardware abstraction over different families I think.

I modified a codebase that was originally bitbashing AVR all through the code, and now compiles on both AVR *and* ARM. There are now two files included by the original Makefile that are the only places containing AVR-specific code (one for each of the 3 AVR boards supported, plus another that holds AVR code common to all 3), and a separate ARM project that supplies its own implementation of those functions and includes everything except the AVR files.

Quote
But Arduino is also slow... abstraction has its price. For example, many people have noticed that the Arduino pin I/O functions are significantly slower than native ATMega / XMega port access. Of course, that can be avoided if you are the person writing the hardware abstraction layer, but then you are also the person who has the work.

Abstractions aren't inherently slow, and often they can be completely zero cost. The Arduino digitalWrite functions are horribly slow because of how they were designed, not because they're an abstraction; there are direct replacements with identical semantics that are thin wrappers around the direct hardware access and just as fast as doing it yourself.
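For example, a thin wrapper like this (a sketch, not Arduino's actual code) costs nothing once the compiler inlines it with constant arguments - on AVR it typically boils down to a single sbi/cbi instruction:

#include <avr/io.h>
#include <stdint.h>

static inline void pin_write(volatile uint8_t *port, uint8_t bit, uint8_t value)
{
  if (value)
    *port |=  (uint8_t)(1 << bit);   /* set the pin   */
  else
    *port &= ~(uint8_t)(1 << bit);   /* clear the pin */
}

/* usage: pin_write(&PORTB, PB4, 1); */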

Also, when I say "Hardware Abstraction Layer", I don't mean you have to write a full general-purpose library that wraps all the peripherals in a platform-agnostic manner (let the manufacturer do that). It's just choosing where to cut through the code so the application code is always separated from the hardware implementation by a function call, and (as mentioned earlier) ideally in different files. It doesn't even have to be at the peripheral level - it can be much higher. And it doesn't have to involve any more work than choosing function names (at least for your original platform) and moving code into those new functions. Done right it can actually *save* you effort, because you waste less time chasing bugs/juggling messy code.

In a trivial example:


void dtvlow_state_off(void)
{
  DTVLOW_DATACLK_DDR   &= ~(DTVLOW_DATA_MASK|DTVLOW_CLK_MASK);
  DTVLOW_DATACLK_PORT  |=   DTVLOW_DATA_MASK|DTVLOW_CLK_MASK;
  DTVLOW_ACKRESET_DDR  &=  ~DTVLOW_ACK_MASK;
  DTVLOW_ACKRESET_PORT |=   DTVLOW_ACK_MASK;
}


became


void dtvlow_state_clear(void){
  dtvlow_rst(1);
  dtvlow_data(0b111);
  dtvlow_clk(1);
  dtvlow_ack(1);
}


and a bit of the hardware side (in different files):

AVR:

void dtvlow_data(uint8_t val) {
  uint8_t out;
  uint8_t data = (val & 0x07) << DTVLOW_DATA_SHIFT;  /* mask to 3 bits, then shift into position */
  DTVLOW_DATACLK_DDR &= ~DTVLOW_DATA_MASK;
  DTVLOW_DATACLK_DDR |= (~data) & DTVLOW_DATA_MASK;
  out  = DTVLOW_DATACLK_PORT;
  out &= ~DTVLOW_DATA_MASK;
  out |= data;
  DTVLOW_DATACLK_PORT = out;
}



ARM:

void dtvlow_data(uint8_t val)
{
  HAL_GPIO_WritePin(GPIOB, UP_Pin, val&1);
  HAL_GPIO_WritePin(GPIOB, DOWN_Pin, val&2);
  HAL_GPIO_WritePin(GPIOB, LEFT_Pin, val&4);
}


The abstraction I chose hides any indication that it is done through GPIO, and only exposes the semantics of the connection. It's *slightly* less quick than the original, but combining those register writes didn't provide any useful speedup, and in fact served to *obscure* what the code was trying to do (and the C optimizer can often do a better job collapsing these than you can.) Elsewhere in the same file those 'optimisations' actually hid the fact that the algorithm itself was suboptimal, and teasing it apart allowed me to fix some bugs. Notably, the original code wasn't properly implementing open-drain signalling, and was haphazardly switching between input and output mode in order to get something that worked, and I replaced it with something that does it correctly and consistently. (On the STM32 you just set a config flag on the pin and forget about it.) After I did that, it turned out that a few different functions were actually functionally identical, but implemented differently - this redundancy was all cleaned up (which is why "dtvlow_state_off" became "dtvlow_state_clear".)
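(For reference, "setting the config flag" on the STM32 side is just this kind of thing - Cube HAL, with the pin names the generator spits out; a sketch, not the exact code:)

/* assumes the Cube-generated HAL header for the target family, e.g. "stm32f1xx_hal.h" */
static void dtvlow_gpio_init(void)
{
  GPIO_InitTypeDef gpio = {0};
  gpio.Pin   = UP_Pin | DOWN_Pin | LEFT_Pin;  /* CubeMX-generated pin defines  */
  gpio.Mode  = GPIO_MODE_OUTPUT_OD;           /* open-drain: set once, forget  */
  gpio.Pull  = GPIO_PULLUP;
  gpio.Speed = GPIO_SPEED_FREQ_LOW;
  HAL_GPIO_Init(GPIOB, &gpio);
}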

LambdaMikel

Quote from: pelrun on 02:30, 11 February 18
Also, when I say "Hardware Abstraction Layer", I don't mean you have to write a full general-purpose library that wraps all the peripherals in a platform-agnostic manner (let the manufacturer do that). It's just choosing where to cut through the code so the application code is always separated from the hardware implementation by a function call, and ideally in different files (which naturally enforces the separation.) It doesn't even have to be at the peripheral level - it can be much higher. And it doesn't have to involve any more work than choosing function names (at least for your original platform) and moving code into those new functions.

Yes, I like that. The Arduino libraries are a bad example, probably, but I can also see how that happens - the more platforms you have to support, the more effort it becomes to maintain such a HAL that does a good job across all platforms. And I guess at some point they also chose the "good enough" principle... after all, it's all opportunity costs and money and effort.

Especially for a hobby project :-) 

I have similar abstractions, btw. For example, instead of writing directly to the PORTx's, I have an inline function / macro "TO_CPC(value)". Same for reading, etc. My code supports the different PCB versions that I have made over the last year (LambdaSpeak 1.5, LambdaSpeak 1.8, LambdaSpeak 1.9, and LambdaSpeak 2.0), some of which use different pin layouts etc., without having to change anything in the application code (you only set a #define LS<Version> in the code, the rest is handled by conditional includes; each platform / PCB version has its own .h file that defines the pin configurations etc.) Yes, that saves a lot of work between different versions in the application code, and of course makes it much more maintainable.
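The scheme is roughly like this (file and macro names made up for illustration; the real sources differ in detail, but it shows the idea):

#if defined(LS20)
  #include "pins_ls_2_0.h"   /* pin layout for LambdaSpeak 2.0 */
#elif defined(LS19)
  #include "pins_ls_1_9.h"
#elif defined(LS18)
  #include "pins_ls_1_8.h"
#else
  #include "pins_ls_1_5.h"
#endif

/* the application code only ever uses these;
   CPC_DATA_PORT / CPC_DATA_PIN come from the per-board header above */
static inline void    TO_CPC(uint8_t value) { CPC_DATA_PORT = value; }
static inline uint8_t FROM_CPC(void)        { return CPC_DATA_PIN;  }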

I believe I could also add implementations of "TO_CPC(var)" and "FROM_CPC(var)" for other MCUs, but I will have to write those macros. While this will be the easy part, other things are not that easy. For example, it took me 40 minutes to figure out how to disable JTAG on the ATmega644... the first 10 Google hits didn't compile for me ;-) So, there is of course now a DISABLE_JTAG in the 644 header file... but figuring that out for a different MCU will again take 40 minutes. These things just add up.
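(The usual recipe on the ATmega644 is the timed sequence from the datasheet - the JTD bit in MCUCR has to be written twice within four clock cycles, something like this:)

#include <avr/io.h>

/* disable the JTAG interface at runtime on the ATmega644:
   JTD must be written twice within four cycles or the write is ignored */
#define DISABLE_JTAG()                  \
  do {                                  \
    uint8_t t = MCUCR | (1 << JTD);     \
    MCUCR = t;                          \
    MCUCR = t;                          \
  } while (0)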

This is what I mean. I agree with everything you wrote; one should program / develop like that. Having code that compiles even on different MCU families / platforms is a great achievement!

pelrun

Quote from: LambdaMikel on 02:50, 11 February 18
Yes, I like that. The Arduino libraries are a bad example, probably, but I can also see how that happens - the more platforms you have to support, the more effort it becomes to maintain such a HAL that does a good job across all platforms. And I guess at some point they also chose the "good enough" principle... after all, it's all opportunity costs and money and effort.

Also, the Arduino libraries were written with the intent of being totally noob-friendly, not of being a high-performance environment for professional code. And they use C++ in a way that has to do a large amount of redundant work at runtime, even though you could write the same code in a way that all the abstractions collapse at compile time and the runtime code is almost optimal (but this is deep wizardry in C++.)

I'm actually really interested in where the new Rust language is going, because it's designed as a C/C++ replacement with rock-solid and straightforward type/memory safety that all magically collapses to nothing during compilation (you put all the error checks in the source code, and the compiler proves which ones are unnecessary for *each* call and drops them!) - unfortunately the embedded system support is still under heavy development.

Quote
Especially for a hobby project :-) 

But this is a product! You're already putting in the effort to make it a quality piece of kit, so don't think of it as "just a hobby project".

Quote
I believe I could also add an implementation of "TO_CPC(var)" and "FROM_CPC(var)" for other MCUs, but I will have to write these macros.

Macros are a *really crappy* replacement for true functions, and I'd *strongly* suggest changing them over, even if you do nothing else. You can still get inlining if you absolutely must, but it's almost always unnecessary, even on an AVR.
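E.g. (using the macro name from your post; CPC_DATA_PORT stands in for whatever the per-board header defines):

#define TO_CPC(value)  (CPC_DATA_PORT = (value))   /* the macro... */

static inline void to_cpc(uint8_t value)           /* ...and a true function */
{
  CPC_DATA_PORT = value;   /* same generated code once inlined, but type-checked and debuggable */
}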

And you don't have to write implementations for any other MCU than the one you're currently using, because you're in control of the hardware. But if you at least keep "separation of concerns" in mind from the start, it's not nearly as painful if/when you hit the limits of your current chip choice and are forced to change the hardware design. At which point, you're going to have to write the new code *anyway*.

LambdaMikel

First steps with the DKtronics emulation:


https://youtu.be/FYlcykW1D4A

It seems that Roland in Space and Alex Higgins' World Pool also support DKtronics speech.
These could be added to

http://www.cpcwiki.eu/index.php/Dk%27tronics_Speech_Synthesizer



Gryzor

Watching the vid a few pages back had me thinking of this:



https://www.youtube.com/watch?v=Fc7xF17O-Ck


...until I heard the phrase coming out of it :D


The singing part is just excellent.

LambdaMikel

out &fbee,&c7...  8)
It has about 25 quotes.

LambdaMikel

#137
Quote from: Bryce on 21:22, 08 February 18
:picard: We did have three separate signals from the CPLD to the AVR, one for each mode. YOU told me to go to a one-wire system.  :D In fact, I think my last version of the schematic probably still has them in. I think your reason was that scanning these inputs meant that the AVR wouldn't reply to the CPC fast enough.

Changed my mind again - LambdaSpeak 1.8 has just been upgraded to also support DKtronics emulation, without adding another line from the decoder.

LambdaSpeak 1.8 has only one line from the address decoder GAL; it triggers the databus input buffer flipflop (a 74LS374) and also goes to an ATmega pin. The GAL now decodes both DKtronics and SSA1 addresses, and it seems that the existing software - including the SSA1 RSX driver and the DKtronics RSX driver - is not confused by that. Since the SSA1 emulation now also sees the requests being made for a DKtronics, and vice versa, I was concerned that the software might get confused, detect "the other device", and then start malfunctioning, but fortunately this is not the case. It is good that, in addition to the different ports, SSA1 and DKtronics also use different "protocols" (i.e., ready bits etc.). Otherwise, that could have been problematic.

That means the LambdaSpeak 1.8 board can also do DKtronics emulation (even though it wasn't designed for it), with one shared IO request wire. A second wire would have been problematic anyway, and I would have needed another 74LS32 OR gate in order to trigger the input buffer flipflop from 2 separate GAL outputs (no more pins left for another dedicated wire from the GAL to the input buffer flipflop clock).

Bryce

I'll just sit here and drink some coffee until you have stopped sounding like my wife changing your mind every 5 minutes :D

Bryce.

LambdaMikel

Quote from: Bryce on 08:55, 12 February 18
I'll just sit here and drink some coffee until you have stopped sounding like my wife changing your mind every 5 minutes :D

The breadboard has 2 wires, and that's fine, but not strictly necessary, so let's keep the 2 wires for LambdaSpeak 2.0, why not. It is certainly the "safer" design, even though it doesn't seem to matter.

LambdaMikel

Quote from: Bryce on 19:20, 09 February 18
Don't you have a ROMBoard/MegaFlash type device to use the DKTronics ROM?

Tested now:

https://youtu.be/wxNSlEyfPMc
It also shows different speech rates and voices.

zhulien

Quote from: LambdaMikel on 20:09, 11 February 18
LambdaSpeak 1.8 has only one line from the address decoder GAL; it triggers the databus input buffer flipflop (a 74LS374) and also goes to an ATmega pin. The GAL now decodes both DKtronics and SSA1 addresses, and it seems that the existing software - including the SSA1 RSX driver and the DKtronics RSX driver - is not confused by that. Since the SSA1 emulation now also sees the requests being made for a DKtronics, and vice versa, I was concerned that the software might get confused, detect "the other device", and then start malfunctioning, but fortunately this is not the case. It is good that, in addition to the different ports, SSA1 and DKtronics also use different "protocols" (i.e., ready bits etc.). Otherwise, that could have been problematic.

That means the LambdaSpeak 1.8 board can also do DKtronics emulation (even though it wasn't designed for it), with one shared IO request wire. A second wire would have been problematic anyway, and I would have needed another 74LS32 OR gate in order to trigger the input buffer flipflop from 2 separate GAL outputs (no more pins left for another dedicated wire from the GAL to the input buffer flipflop clock).


That is correct, there is no confusion between them, and software is in fact quite easy to port from one to the other - exactly the same logic, just a different port. I wonder why DkTronics didn't just use the same ports as Amstrad did with the SSA-1. On the CPC, both can currently co-exist without confusion too, but there is quite a bit of line noise with 2 speech synths going from a splitter to the audio jack - that is likely a totally separate issue.

With LambdaSpeak, I noticed you have opted for an audio jack on it. If you wanted (perhaps via a jumper), you could route that audio back to the sound pin on the bus too, so that the speech comes out of the built-in CPC speaker (or is that an external power supply connector, not an audio jack?). This trick also works with an AMDRUM - which, I wonder... can you also emulate that with the LambdaSpeak? That would be super-awesome! It is just an 8-bit D/A converter using port FF to play drum machine music. I think the RAM Music Machine is similar too, but that thing has a lot more functions than just the D/A converter. There is no buffer or anything with the AMDRUM; it just plays as fast or slow as you make the Z80 send data.



https://www.youtube.com/watch?v=0z-jGDOzwGQ

LambdaMikel

#143
Quote from: zhulien on 15:08, 13 February 18
If you wanted (perhaps via a jumper), you could route that audio back to the sound pin on the bus too, so that the speech comes out of the built-in CPC speaker (or is that an external power supply connector, not an audio jack?) - this trick also works with an AMDRUM - which, I wonder... can you also emulate that with the LambdaSpeak?

The little daughter board is the TextToSpeech click board from MikroElektronika:

https://www.mikroe.com/text-to-speech-click

Bryce is evaluating whether he is going to keep the click board solution, or whether he will put the Epson IC directly on the PCB (with an OpAmp and IN/OUT audio jacks). Bryce, what's your take on the audio output -> CPC speaker part?


Amdrum rocks!! 8) Yes, it indeed sounds quite easy to do - so that means just adding a D/A converter and letting the CPLD decode the Amdrum address? I can try this on my breadboard prototype.

Not sure if Bryce would want to add it though (if it worked).

Do you have a suggestion for a DAC for this experiment?

zhulien

For me, as an M4 card with SSA-1 and Dk'Tronics and 'if possible' Amdrum capability, I'd love to buy at least one... I recommend a jumper to wire it to the CPC sound pin, as some people likely won't want it - but then others will - it will be mono but will automatically blend with the CPC audio.

LambdaMikel

Quote from: zhulien on 19:09, 13 February 18
I recommend a jumper to wire it to the CPC sound pin as some people likely won't want it - but then others will - it will be mono but automatically blend with the CPC audio.

I would like that, too. I am not a big fan of all the wires and external speakers etc., even if the sound quality is less great, of course. I guess it depends on whether Bryce wants to do an audio section, really, or if he is just going to use the click! board. In the former case, it should indeed be as easy as putting a jumper on the board. In the latter case, the click! board unfortunately does not have any audio output other than the jack.

remax

Quote from: zhulien on 19:09, 13 February 18
for me, as an M4 card with SSA-1 and Dk'Tronics and 'if possible' Amdrum capability I'd love to buy at least 1...


I second that


Quote from: LambdaMikel on 19:14, 13 February 18
I would like that, too. I am not a big fan of all the wires and external speakers etc., even if the sound quality is less great, of course.
I second that too
Brain Radioactivity

LambdaMikel

Quote from: zhulien on 15:08, 13 February 18
It is just an 8-bit D/A converter using port FF to play drum machine music. I think the RAM Music Machine is similar too, but that thing has a lot more functions than just the D/A converter. There is no buffer or anything with the AMDRUM; it just plays as fast or slow as you make the Z80 send data.


I am wondering if that could even come "for free" with the current LambdaSpeak hardware. I will see if I can just do this in software on the ATmega644 and use PWM for the output with a simple filter. That might be fast enough. I mean, the 644 is running at 20 MHz...

LambdaMikel

#148
... I mostly found CDT tape images of Amdrum. OK, one DSK for the main program, but all the drumkits I found are tapes. Does anyone have a link to DSK files for the drum sets?

@Bryce - we have PB4 / OC0B available! That is a PWM pin... I will see if D/A conversion will be fast enough in "Amdrum mode" (control byte &E3) using PWM. Then, I guess we only need a lowpass filter (RC) at the pin to feed it into an OpAmp, if that worked?
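Roughly what I have in mind - an untested sketch, assuming Timer0 fast PWM on OC0B and the avr-libc register names:

#include <avr/io.h>

/* 8-bit "Amdrum style" D/A via fast PWM on OC0B (PB4) of the ATmega644.
   At 20 MHz the PWM carrier is 20 MHz / 256 = ~78 kHz, which the RC
   lowpass in front of the OpAmp should take care of. */
static void amdrum_pwm_init(void)
{
  DDRB  |= (1 << PB4);                                    /* OC0B as output          */
  TCCR0A  = (1 << COM0B1) | (1 << WGM01) | (1 << WGM00);  /* fast PWM, non-inverting */
  TCCR0B  = (1 << CS00);                                  /* no prescaler            */
  OCR0B   = 0x80;                                         /* mid-scale = silence     */
}

/* called for every sample byte the CPC writes in Amdrum mode */
static inline void amdrum_write_sample(uint8_t sample)
{
  OCR0B = sample;                                         /* duty cycle = sample     */
}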

Bryce

Quote from: zhulien on 15:08, 13 February 18
I wonder why DkTronics didn't just use the same ports as Amstrad did with the SSA-1.

Because the DKTronics Speechsynth came out BEFORE the SSA-1. It's hard to copy something that doesn't exist :D

Regarding Audio: My plan is still to have an audio in port so that the stereo output of the CPC can be mixed with the dual mono output of the speech. I can add a jumper to feed it back into the expansion port too if people want that. @LambdaMikel : It will most likely be cheaper to put the parts on the PCB rather than use a Clickboard.

Amdrum: The PCB already has everything it needs to emulate the Amdrum, you'd just need to feed a PWM pin from the AVR to the audio amplifier and write lots more code :)

Bryce.
