# CMSIS-NN Software Library (Version 3.0.0)
## Softmax Functions

## Functions

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
Q15 softmax function.

void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
Q7 softmax function.

void arm_softmax_s8 (const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, int8_t *output)
S8 softmax function.

void arm_softmax_u8 (const uint8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, uint8_t *output)
U8 softmax function.

void arm_softmax_with_batch_q7 (const q7_t *vec_in, const uint16_t nb_batches, const uint16_t dim_vec, q7_t *p_out)
Q7 softmax function with batch parameter.

## Description

Base-2 exponential softmax functions (using 2^x rather than e^x).

## Function Documentation

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
Parameters
- [in] vec_in: pointer to the input vector
- [in] dim_vec: input vector dimension
- [out] p_out: pointer to the output vector

Instead of the typical e-based softmax, this function uses a base-2 softmax:

y_i = 2^(x_i) / sum(2^(x_j))

The output values therefore differ from those of an e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.

void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
Parameters
- [in] vec_in: pointer to the input vector
- [in] dim_vec: input vector dimension
- [out] p_out: pointer to the output vector

Instead of the typical e-based softmax, this function uses a base-2 softmax:

y_i = 2^(x_i) / sum(2^(x_j))

The output values therefore differ from those of an e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.

Referenced by arm_softmax_with_batch_q7().

void arm_softmax_s8 (const int8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, int8_t *output)
Parameters
- [in] input: pointer to the input tensor
- [in] num_rows: number of rows in the input tensor
- [in] row_size: number of elements in each input row
- [in] mult: input quantization multiplier
- [in] shift: input quantization shift, within the range [0, 31]
- [in] diff_min: minimum difference from the row maximum; used to check whether the quantized exponential operation can be performed
- [out] output: pointer to the output tensor
Note
Supported framework: TensorFlow Lite micro (bit-accurate)

References ACCUM_BITS, CLAMP, DIV_POW2, DIV_POW2_MVE, EXP_ON_NEG, MAX, MUL_SAT, MUL_SAT_MVE, and ONE_OVER1.

void arm_softmax_u8 (const uint8_t *input, const int32_t num_rows, const int32_t row_size, const int32_t mult, const int32_t shift, const int32_t diff_min, uint8_t *output)
Parameters
- [in] input: pointer to the input tensor
- [in] num_rows: number of rows in the input tensor
- [in] row_size: number of elements in each input row
- [in] mult: input quantization multiplier
- [in] shift: input quantization shift, within the range [0, 31]
- [in] diff_min: minimum difference from the row maximum; used to check whether the quantized exponential operation can be performed
- [out] output: pointer to the output tensor
Note
Supported framework: TensorFlow Lite micro (bit-accurate)

References ACCUM_BITS, CLAMP, DIV_POW2, EXP_ON_NEG, MAX, MUL_SAT, and ONE_OVER1.

void arm_softmax_with_batch_q7 (const q7_t *vec_in, const uint16_t nb_batches, const uint16_t dim_vec, q7_t *p_out)
Parameters
- [in] vec_in: pointer to the input vector
- [in] nb_batches: number of batches
- [in] dim_vec: input vector dimension
- [out] p_out: pointer to the output vector

Instead of the typical e-based softmax, this function uses a base-2 softmax:

y_i = 2^(x_i) / sum(2^(x_j))

The output values therefore differ from those of an e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.

References arm_softmax_q7().
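Since this function references arm_softmax_q7(), it presumably applies the single-vector kernel once per batch, advancing the input and output pointers by dim_vec each time. A sketch of that batching structure, with vec_softmax_q7 as a hypothetical stand-in for the real kernel (which is not reimplemented here):

```c
#include <stdint.h>

typedef int8_t q7_t;  /* assumption: CMSIS defines q7_t as int8_t */

/* Hypothetical stand-in for arm_softmax_q7(). It only negates each
 * element so the batching offsets below can be exercised; the real
 * kernel computes the base-2 softmax over the vector. */
static void vec_softmax_q7(const q7_t *in, uint16_t dim, q7_t *out)
{
    for (uint16_t i = 0; i < dim; i++)
        out[i] = (q7_t)(-in[i]);
}

/* Presumed structure of the batch variant: one kernel call per batch,
 * with both pointers advanced by dim_vec elements per iteration. */
static void softmax_with_batch_q7_sketch(const q7_t *vec_in,
                                         uint16_t nb_batches,
                                         uint16_t dim_vec,
                                         q7_t *p_out)
{
    for (uint16_t b = 0; b < nb_batches; b++)
        vec_softmax_q7(vec_in + (uint32_t)b * dim_vec, dim_vec,
                       p_out + (uint32_t)b * dim_vec);
}
```

Each batch is processed independently, so the layout is simply nb_batches contiguous vectors of dim_vec elements.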