ADC/DMA transfer problem


I use an STM32F103VE on a custom board.
ADC1 is started every millisecond and performs 6 conversions in about 750 µs. The results are transferred via DMA to RAM.

The sequence is started with this code:

DMA1_Channel1->CCR &= ~DMA_CCR_EN; // disable DMA channel to restart the transfer
DMA1_Channel1->CNDTR = 6;          // 6 transfers per sequence
DMA1_Channel1->CCR |= DMA_CCR_EN;  // re-enable DMA channel
ADC1->CR2 |= ADC_CR2_ADON;         // start ADC1

Immediately after the DMA channel is started, DMA1_Channel1->CNDTR counts down to 5 and the last ADC result (from the previous sequence) is transferred to the first RAM location.
The next five ADC conversions are transferred as well, so the DMA transfer completes while the ADC is still performing its last conversion.
As a result, the first value in RAM is not correct.

If I read ADC->DR before starting the DMA, the result is the same.
It seems the DMA or the ADC has a stored request, and this request cannot be cleared?

What is the solution for this problem?


I have successfully used the ADC with DMA on F0 and F3 devices, although I’ve never tried it on an F103, and I did this in ‘bare metal’ mode without the ST HAL, so hopefully my advice is applicable to your situation.

The first thing I’d recommend is that you keep the DMA channel disabled while writing to the various DMA channel configuration registers (CNDTR, CMAR, CPAR, CCR etc.). Set DMA_CCR_EN only after all other DMA channel settings have been made. I can’t say for certain whether this is an absolute requirement - the ST reference docs are unclear on this point - but it’s what I do in my own ADC DMA init routine.

Also make sure that the ADC is enabled but idle (not converting) before setting DMA_CCR_EN, and explicitly clear the contents of ADC1->ISR before enabling DMA; e.g. ADC1->ISR = 0x000000FF;

If you are using DMA in circular buffer mode, make sure that both DMA_CCR_CIRC -and- ADC_CFGR1_DMACFG are set.

You should be setting ADC_CR2_ADON and doing the other ADC initialization tasks once, up front, rather than before each conversion sequence. Unless you’re concerned about power consumption, leave the ADC enabled (ADON set) and set ADC_CR_ADSTART to trigger a conversion sequence.

Here’s the basic sequence of operations I use to run the ADC in software triggered mode with DMA:

- Disable DMA channel associated with ADC (clear DMA_CCR_EN)
- Reset the ADC (set RCC_APB2RSTR_ADCRST, then clear it) and enable its clock (set RCC_APB2ENR_ADC1EN)
- Configure ADC clock source (I set RCC_CR2_HSI14ON)
- Disable ADC (set ADC_CR_ADDIS), wait for disable confirm (ADC_CR_ADEN clear)
- Initiate ADC autocalibration process (set ADC_CR_ADCAL, wait for ADC_CR_ADCAL to be cleared by hardware)
- Set the ADC config registers to the desired setup (ADC1->CFGR1, CFGR2, SMPR, TR, CHSELR, ADC->CCR)
- Clear ADC1->ISR (ADC1->ISR = 0x000000FF)
- Enable ADC (set ADC_CR_ADEN, wait for ADC_ISR_ADRDY set)
- Clear ADC1->ISR again
- Enable any desired ADC interrupts, configure NVIC

- Initialize the DMA channel associated with the ADC (for the STM32F091 that I’m familiar with, this is DMA1_Channel1)
- Enable DMA channel (set DMA_CCR_EN)

- Set ADC_CR_ADSTART to trigger a conversion sequence
- Wait for ADC_ISR_EOSEQ set
- Read contents of DMA buffer for conversion results

When using DMA in non-circular (linear buffer) mode, I’m not certain how much DMA re-initialization is required - again, the reference docs are unclear on this point, but I clear DMA_CCR_EN, re-initialize CPAR, CMAR and CNDTR, then set DMA_CCR_EN before triggering a new conversion. I have confirmed that none of this re-init is needed if using DMA in circular buffer mode.

Essentially, I repeat the last 5 steps of the procedure I outlined above for each conversion sequence I run.

Hope this helps...

Hello MSchultz,

thanks for your reply.
Of course I configure the DMA registers only while the DMA is disabled, and after initialisation and calibration the ADC is always idle, to avoid delays.
The initialisation sequence I use is mostly the same as the one you described. But maybe the ADCs of the F0 and F3 are a little different from the F1, because some register bits or registers are not available on the F103 (ADC_CFGR1_DMACFG, ADC_CR_ADSTART, ADC_CR_ADDIS ...). I don’t use the Standard Peripheral Library but the HAL driver, so some abbreviations may be different. Like you, I don’t use library functions but work at register level.

The major difference I see is that you use interrupts and clear the interrupt bits. I only read the conversion results every ms, without interrupts, and I don’t clear any interrupt bits. In my opinion there is a difference between an interrupt and a DMA request: the interrupt occurs only at the end of a conversion sequence, but a DMA request is issued after every single conversion. In earlier applications (with µVision from Keil) I only disabled DMA (after the ADC had finished), then re-enabled DMA and enabled the ADC. Everything worked fine without touching the interrupt bits. Now with SW4STM32 and the HAL libraries (I don’t know if this is the problem) it seems that a DMA request is still pending after the ADC has finished, and data is read as soon as DMA is enabled even though the ADC is not enabled.

Now, when I read the ADC_DR register and clear and then set the ADC_CR2_DMA bit, it works as expected (found by trial over the last few days).

I’m confused about that; this was not what I expected. Is there a relation between the ADC interrupt bits and the DMA request bit?
ST’s manuals are not sufficient on this point.


The register bit names I use are the ones defined in the CMSIS headers - what you get if you #include “stm32f103xe.h” (or in my case, “stm32f091xc.h”)

I just took a quick look at the F103 ADC, and it looks closer to the ADC on the F3 parts than to the one on the F0 series. You are correct that some of the register and bit names differ (ADC1->CR1 instead of ADC1->CFGR1, for example). When I responded to your original request for help I was looking at the ADC code I had developed for my F091 project; I had to use different code for my F303 project, although it was based on the ADC library I created for the F0 ADC.

I apologize for the confusion this may have caused. I’ve not done much with the F1 series devices yet, and was operating under the (incorrect) assumption that the F1 ADC was the same as the one on the F0 parts.

I don’t use ADC interrupts in my F0 project(s), but I do test the interrupt status flags for the ADC in some places, as this is the only way to tell when certain events (end of conversion, end of sequence) occur. So I’m careful to explicitly clear the ADC1->ISR following init or prior to starting a new conversion.

I’m not certain just what generates the “trigger” for the DMA. It may be the EOC (end of conversion) ISR bit, it may be something else.

Hello MSchultz,

you don’t have to apologize. I use the same chip header files from ST as you, but in some cases I use definitions from the new HAL or LL libraries, because I use SW4STM32 and HAL. ST did not have a lucky hand in moving from the Standard Peripheral Library to these new libraries; some confusion is unavoidable. But part of the problem lies in the manuals - there is too little information.

The ADC issues a request to the DMA. It cannot be the EOC (or EOS), because in a sequence with several conversions a request is sent to the DMA after every single conversion. In my case the DMA has a pending request when it is started, and I don’t know where to clear this request. It seems that the combination “read ADC->DR and clear/set ADC_CR2_DMA” clears this flag. So perhaps the request is an ADC flag? But in ST’s manuals you don’t read a word about this.

My confusion is: in earlier programs I wrote (µVision, SPL) for the same purpose, I didn’t have to clear any flag!