I have noticed intermittent data corruption while using the SPI on an LPC 2103:
If the master is generating the clock so the slave can provide data to the master, and no SPDR 'read delay' is used in the SPI ISR prior to reading the data (and sending the next clock byte), then the next data byte received is corrupted.
According to NXP (which, I see, took great care in describing the terminating conditions of each device):
"When a master, this bit is set at the end of the last cycle of the transfer. When a slave, this bit is set on the last data sampling edge of the SCK."
I read this as the following (crude drawings included):
If this is the last SCK pulse, the master sets its SPIF when the end of the last cycle is over:
    ----\
         \
          \________   SCK
                   ^
                  set (end of last cycle)
If this is the last SCK pulse, the slave sets its SPIF on the last data sampling edge:
            set
             v
    ----\
         \
          \________   SCK
I have read the 2103 errata concerning the SPI if CPHA = 0 but the settings for these devices are CPOL = 0, CPHA = 1, so that is not the problem.
The SPSR is read prior to the SPDR (with a delay between the status and data register accesses). If the delay is set to 8 iterations or more (via NOPs or equivalent), there is no data corruption.
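For reference, the handler in question looks roughly like this. The register names S0SPSR/S0SPDR are from the LPC2103 user manual, but they are stubbed here as plain variables so the sketch stands alone; the count of 8 is just the empirically found delay, not a documented figure:

```c
#include <assert.h>
#include <stdint.h>

/* Stubs standing in for the LPC2103 SPI registers (S0SPSR, S0SPDR) so
   this compiles on a host; on target these would be volatile register
   accesses at the addresses given in the user manual. */
static volatile uint8_t S0SPSR = 0x80;   /* SPIF set */
static volatile uint8_t S0SPDR = 0x5A;   /* last byte shifted in */

static uint8_t spi_isr_read(uint8_t next_out)
{
    uint8_t status = S0SPSR;      /* read status first */
    (void)status;

    /* Empirically, ~8 iterations of delay are needed here before
       touching S0SPDR, or the *next* received byte is corrupted. */
    for (volatile int i = 0; i < 8; i++)
        ;                          /* NOP-equivalent delay */

    uint8_t in = S0SPDR;          /* now read the received data */
    S0SPDR = next_out;            /* queue the next outgoing byte */
    return in;
}
```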
The master SPI clock meets the NXP requirement: "As a result, bit 0 must always be 0. The value of the register must also always be greater than or equal to 8."
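That requirement applies to the clock counter register (S0SPCCR on the LPC2103, where SCK = PCLK / S0SPCCR). A quick sketch of the rule, plus an illustrative helper (my own, not an NXP API) for picking a legal divider:

```c
#include <assert.h>
#include <stdint.h>

/* Per the manual: the divider must be even (bit 0 = 0) and >= 8. */
static int spccr_valid(uint32_t v)
{
    return (v >= 8) && ((v & 1u) == 0);
}

/* Smallest legal divider that keeps SCK at or below a requested
   maximum frequency (helper for illustration only). */
static uint32_t spccr_for(uint32_t pclk_hz, uint32_t max_sck_hz)
{
    uint32_t div = (pclk_hz + max_sck_hz - 1) / max_sck_hz;  /* ceil */
    if (div < 8)
        div = 8;
    if (div & 1u)
        div++;                    /* round up to even */
    return div;
}
```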
Is it possible that this corruption is a result of the timing differences between the master and slave SPIF notifications or does this seem to be something more fundamental? Or has anyone else ever seen this?
Quite a number of SPI slaves have problems reacting to a byte transfer in time: picking up the byte from the master and emitting a new outgoing byte.
There will always be a race condition, unless the SPI slave has at least one read buffer and one write buffer so the program can get the full byte transfer time to respond, instead of having a half bit-time to do the read+write.
Without buffering, the SPI is just a latching shift register, where the read+write must be performed between two shifts.
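A minimal software model of that unbuffered case makes the constraint obvious: one byte is exchanged full duplex through the shift register, and the slave's read of the incoming byte and write of the next outgoing byte must land between the last shift of one byte and the first shift of the next:

```c
#include <assert.h>
#include <stdint.h>

/* Model of an unbuffered SPI slave: a latching shift register. Eight
   shifts exchange one byte full duplex, MSB first. The slave's byte
   shifts out to the master while the master's byte shifts in. */
static uint8_t shift_exchange(uint8_t *slave_reg, uint8_t master_out)
{
    uint8_t to_master = 0;
    for (int bit = 7; bit >= 0; bit--) {
        /* Slave's MSB goes to the master... */
        to_master = (uint8_t)((to_master << 1) | ((*slave_reg >> 7) & 1u));
        /* ...while the master's current bit shifts into the slave. */
        *slave_reg = (uint8_t)((*slave_reg << 1) | ((master_out >> bit) & 1u));
    }
    /* Between this return and the next call, the slave firmware must
       read *slave_reg and load its next outgoing byte into it. */
    return to_master;
}
```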
A number of the NXP chips have not only one or more SPI devices, but also one or more SSP devices, where the SSP device can be configured to use the SPI protocol, but with full receive and transmit FIFO.
So basically the data corruption can only be prevented by instituting these wait times to allow for sufficient next data byte processing overhead by the slave.
So if my crude drawings were accurate, then the only delay the slave would have to load its next data byte is really just the ramp down time of the trailing edge of the SPI clock pulse.
If you update data on one phase of the clock cycle, and have the other side sample the data on the other phase, then you have half a clock period to react and perform.
This is the reason why it is important to read through the SPI documentation for a processor, to find out whether the chip has any buffering, in case higher SPI speeds should be used. For the master side, it's mainly a question of being able to keep the cable filled. For the slave side, it's also a question of avoiding overrun, where the slave doesn't manage to read out one byte and write a new byte before the master continues ticking the clock line with the first bit of the next byte.
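To put a number on that half clock period: for an assumed core clock cclk and SPI clock sck, the unbuffered slave has cclk / (2 * sck) CPU cycles to react, which at high SPI speeds is less than interrupt entry alone:

```c
#include <assert.h>
#include <stdint.h>

/* CPU cycles available to an unbuffered slave that must respond within
   half an SCK period: cclk / (2 * sck). The example frequencies below
   are assumptions for illustration, not taken from the original post. */
static uint32_t slave_budget_cycles(uint32_t cclk_hz, uint32_t sck_hz)
{
    return cclk_hz / (2u * sck_hz);
}
```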
you have half a clock period to react and perform
During data transfers I agree with the 1/2 clock cycle. However, when the current 8 bit transfer is completed and a new transfer is signalled through the SPIF bit, the time it takes to load the shift register and then start the transfer is not dependent on the SCK, but rather the PCLK at that time.
If the peripherals run off of the PCLK and the SCK is a 'subset' of the PCLK, then the resolution (SCK start times) of the SCK is really the resolution of the PCLK.
Said another way, there are 'x' number of PCLKs within one SCK. When the SCK starts, it starts on a PCLK edge.
So really, the only time guarantee you have is on the trailing edge of the last SCK.
Then I think you are making an incorrect assumption about intended use.
Any data the slave should send out at the start of a new transfer, should have been written to the shift register - or outbuffer or FIFO - way before.
That is also why some SPI protocols activate the slave select and then have the slave send one or more dummy bytes before real data is issued - to give the slave time to start emitting fresh data. Sending a byte first gives the slave plenty of time to compute something and prepare for transmission.
That is also a big reason why many chips don't have support for automatically driven slave-select from the master. This allows the master to activate the slave select, indicating to the slave that it should prepare the first byte for a new transmission. The master can then, in software or using timers or similar, decide how long to wait until it starts the first transfer.
If the SPI slave is intended to always send current time on each transfer, then it can't insert the first byte of data on speculation. So the master must either activate the select signal with some margin, or the master must know that a four byte time stamp from the slave requires the master to send 5 bytes, where the first byte is just a dummy transfer.
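That "N+1 transfers" pattern can be modeled in a few lines. The names and the 4-byte timestamp below are purely illustrative: the slave can only load its reply after a transfer completes, so the master clocks 5 bytes and discards the first, dummy, reply:

```c
#include <assert.h>
#include <stdint.h>

#define DUMMY 0xFF

static uint8_t slave_pending = DUMMY;   /* what the slave shifts out next */
static const uint8_t timestamp[4] = { 0x12, 0x34, 0x56, 0x78 };
static int ts_idx = 0;

/* One full-duplex byte transfer; the slave prepares its *next* byte
   only after this transfer has completed. */
static uint8_t transfer(uint8_t from_master)
{
    (void)from_master;
    uint8_t out = slave_pending;
    slave_pending = (ts_idx < 4) ? timestamp[ts_idx++] : DUMMY;
    return out;
}

/* Master side: a 4-byte timestamp costs 5 transfers. */
static void read_timestamp(uint8_t dst[4])
{
    (void)transfer(0x00);               /* dummy transfer; discard reply */
    for (int i = 0; i < 4; i++)
        dst[i] = transfer(0x00);
}
```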
In your case, you seem to be using CPHA=1. In that case, the SSEL signal will always go inactive between transfers. That makes sure that you get enough reaction time between two transfers.
By the way - do you need two SPI interfaces on the chip? If not, then I would recommend that you use SSP1 instead of SPI0. With FIFO support, it will automagically handle the read+write to keep the transfer ongoing. But same thing there - the slave must either prepare the first byte directly on SSEL activation, or even before SSEL. After that, it's enough to top up the outgoing FIFO and read out corresponding amounts of data from the incoming FIFO.
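A sketch of why the FIFO helps: the firmware only has to keep the FIFOs topped up, not react within half a bit time. The 8-deep ring buffer here is a software stand-in for the SSP's hardware FIFO (its depth on the actual SSP may differ), so no register names are needed and the sketch runs on a host:

```c
#include <assert.h>
#include <stdint.h>

#define FIFO_DEPTH 8

typedef struct {
    uint8_t buf[FIFO_DEPTH];
    int head, tail, count;
} fifo_t;

static int fifo_put(fifo_t *f, uint8_t b)
{
    if (f->count == FIFO_DEPTH)
        return 0;                  /* full - like TNF being clear */
    f->buf[f->head] = b;
    f->head = (f->head + 1) % FIFO_DEPTH;
    f->count++;
    return 1;
}

static int fifo_get(fifo_t *f, uint8_t *b)
{
    if (f->count == 0)
        return 0;                  /* empty - like RNE being clear */
    *b = f->buf[f->tail];
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count--;
    return 1;
}

/* Top up the TX FIFO with as much pending data as fits;
   returns the number of bytes queued. */
static int ssp_top_up(fifo_t *tx, const uint8_t *data, int len)
{
    int n = 0;
    while (n < len && fifo_put(tx, data[n]))
        n++;
    return n;
}
```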
Way before what - if it is an interrupt-driven transfer then the interrupt is the trigger for the next data byte. If there is no FIFO, then the mechanism for transfer can only be by this sequence. If the documentation is interpreted correctly, the slave has SCK trailing edge time to load the byte.
If the SPI slave is intended to always send current time on each transfer, then it can't insert the first byte of data on speculation
It would do so on a real-time event - the SPIF interrupt. There is no 'speculation' for an interrupt.
So the master must either activate the select signal with some margin, or the master must know that a four byte time stamp from the slave requires the master to send 5 bytes, where the first byte is just a dummy transfer.
I also don't buy into the 'preamble-type' approach to successful data transfer. If it came down to it, I would prefer to change the data to 16 bit and pad the leading or trailing bits as zeros to 'buy' time for the next transfer. Either way, the data transfer rate would be affected by whatever 'modification' is implemented.
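For what that trade-off looks like in numbers: padding each 8-bit payload into a 16-bit frame (leading zeros chosen arbitrarily here) buys the slave 8 extra bit-times, but halves the effective payload rate at a given SCK:

```c
#include <assert.h>
#include <stdint.h>

/* Pad an 8-bit payload into a 16-bit frame, zeros in the leading byte
   (an arbitrary choice for illustration). */
static uint16_t pad_frame(uint8_t data)
{
    return (uint16_t)data;
}

static uint8_t unpad_frame(uint16_t frame)
{
    return (uint8_t)(frame & 0xFFu);
}

/* Effective payload rate: one 8-bit payload per frame_bits SCK cycles. */
static uint32_t payload_bytes_per_sec(uint32_t sck_hz, uint32_t frame_bits)
{
    return sck_hz / frame_bits;
}
```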