Hello, here are my USART settings, which work fine:

USAR.USART_BaudRate = 9600;
USAR.USART_StopBits = USART_StopBits_1;
USAR.USART_WordLength = USART_WordLength_8b;
USAR.USART_Parity = USART_Parity_No;
USAR.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USAR.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
To reduce errors I want to use a parity bit, so I changed the settings as shown below:

USAR.USART_BaudRate = 9600;
USAR.USART_StopBits = USART_StopBits_1;
USAR.USART_WordLength = USART_WordLength_8b;
USAR.USART_Parity = USART_Parity_Even;
USAR.USART_HardwareFlowControl = USART_HardwareFlowControl_None;
USAR.USART_Mode = USART_Mode_Rx | USART_Mode_Tx;
But when I apply the same settings on my PC to receive the data (even parity), it doesn't receive anything.
After adding the parity bit, have you tried increasing the word length to 9 bits?
USAR.USART_WordLength=USART_WordLength_9b;
Is there a 9-bit data mode on your device?
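To spell that out: on STM32 USARTs the programmed word length includes the parity bit, so 8 data bits plus even parity needs the 9-bit setting. A sketch of the two fields that change, assuming the same USART_InitTypeDef variable USAR from the question:

```c
/* Sketch only: on STM32, the word length counts the parity bit,
 * so 8 data bits + 1 parity bit requires the 9-bit setting. */
USAR.USART_WordLength = USART_WordLength_9b;   /* 8 data bits + parity */
USAR.USART_Parity     = USART_Parity_Even;
```

The PC side, by contrast, usually counts data bits excluding parity, so it should stay at 8 data bits with even parity.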
A parity bit does absolutely nothing to decrease errors!
It gives you a certain chance of detecting when an error has happened - but it does nothing to reduce the chance of it happening.
Have you checked the device datasheet to ensure that your combination of options is actually valid? Note that this is entirely defined by the specific chip hardware - it has nothing to do with Keil.
Thank you Zack Havens, I didn't pay attention to the bit settings. It works fine now.
Thanks, Andrew Neil, for your explanation.
Why do you say that? As far as I know, the parity bit was invented to reduce faulty byte reception with a simple check. We can't reduce the signal noise, but we can try to limit faulty input data.
That is badly worded! I know what is meant, but it doesn't make the point with any degree of success. If an error can be detected, then something can be done to retry and correct it. Thus errors can be decreased; or at least the number of errors slipping through the system can.
Just note that communication errors tend to happen in bursts, so you might get a two-bit or three-bit error, or even longer error runs.
A parity bit can only detect an odd number of bit errors in the character.
So a lot of communication ignores parity bits. Either a more complex transfer encoding is used, adding more than one bit per word so that errors can also be corrected (this is normally done with dedicated hardware), or the messages are instead given checksums or error-correction data as separate bytes in each packet.
A single 16-bit CRC can detect all combinations with an odd number of faulty bits, and it can handle any one error burst of up to 16 sequential bits. So two bytes for a CRC-16 are normally a better investment than sending every single byte with an additional parity bit. Besides, not every UART can do 8 data bits plus parity.
It can be argued that having a parity bit in every character would scale better with packet size. But it really doesn't scale well, since each individual byte is so badly protected.
It's better to use a larger CRC, or maybe even a tw0-dimensional ECC.
Keil: Please consider your spam filter. Why should I need to use a zero in tw0-dimensional? And why isn't the '-' enough as separator?