Module core::arch::arm
Platform-specific intrinsics for the arm platform.
See the module documentation for more details.
Structs
| float32x2_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of two packed `f32`. |
| float32x4_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of four packed `f32`. |
| int16x2_t | [Experimental] [ARM] ARM-specific 32-bit wide vector of two packed `i16`. |
| int16x4_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of four packed `i16`. |
| int16x8_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of eight packed `i16`. |
| int32x2_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of two packed `i32`. |
| int32x4_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of four packed `i32`. |
| int64x1_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of one packed `i64`. |
| int64x2_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of two packed `i64`. |
| int8x4_t | [Experimental] [ARM] ARM-specific 32-bit wide vector of four packed `i8`. |
| int8x8_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of eight packed `i8`. |
| int8x16_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of sixteen packed `i8`. |
| poly16x4_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of four packed `u16`. |
| poly16x8_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of eight packed `u16`. |
| poly8x8_t | [Experimental] [ARM] ARM-specific 64-bit wide polynomial vector of eight packed `u8`. |
| poly8x16_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of sixteen packed `u8`. |
| uint16x2_t | [Experimental] [ARM] ARM-specific 32-bit wide vector of two packed `u16`. |
| uint16x4_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of four packed `u16`. |
| uint16x8_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of eight packed `u16`. |
| uint32x2_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of two packed `u32`. |
| uint32x4_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of four packed `u32`. |
| uint64x1_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of one packed `u64`. |
| uint64x2_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of two packed `u64`. |
| uint8x4_t | [Experimental] [ARM] ARM-specific 32-bit wide vector of four packed `u8`. |
| uint8x8_t | [Experimental] [ARM] ARM-specific 64-bit wide vector of eight packed `u8`. |
| uint8x16_t | [Experimental] [ARM] ARM-specific 128-bit wide vector of sixteen packed `u8`. |
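These vector types are opaque on the Rust side; a common way to obtain values of them is to `transmute` from a plain array of the matching lane type and width. The following is a minimal sketch, not taken from this page, assuming a nightly toolchain targeting `arm` with NEON enabled and the intrinsics still gated behind the unstable `stdsimd` feature; it builds two `float32x2_t` values and adds them with `vadd_f32` from the Functions table below.

```rust
// Minimal sketch (assumes nightly Rust, an `arm` target with NEON enabled,
// and the unstable `stdsimd` feature gate for these intrinsics).
#![feature(stdsimd)]

#[cfg(all(target_arch = "arm", target_feature = "neon"))]
unsafe fn add_two_lanes() -> [f32; 2] {
    use core::arch::arm::{float32x2_t, vadd_f32};
    use core::mem::transmute;

    // float32x2_t has the same size as [f32; 2], so transmute is a common
    // (unsafe) way to construct it.
    let a: float32x2_t = transmute([1.0f32, 2.0]);
    let b: float32x2_t = transmute([3.0f32, 4.0]);

    // Lane-wise add of two 64-bit wide vectors.
    let sum = vadd_f32(a, b);
    transmute(sum) // [4.0, 6.0]
}
```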
Functions
| __DMB⚠ | [Experimental] [ARM and mclass] Data Memory Barrier |
| __DSB⚠ | [Experimental] [ARM and mclass] Data Synchronization Barrier |
| __ISB⚠ | [Experimental] [ARM and mclass] Instruction Synchronization Barrier |
| __NOP⚠ | [Experimental] [ARM and mclass] No Operation |
| __SEV⚠ | [Experimental] [ARM and mclass] Send Event |
| __WFE⚠ | [Experimental] [ARM and mclass] Wait For Event |
| __WFI⚠ | [Experimental] [ARM and mclass] Wait For Interrupt |
| __disable_fault_irq⚠ | [Experimental] [ARM and mclass] Disable FIQ |
| __disable_irq⚠ | [Experimental] [ARM and mclass] Disable IRQ Interrupts |
| __enable_fault_irq⚠ | [Experimental] [ARM and mclass] Enable FIQ |
| __enable_irq⚠ | [Experimental] [ARM and mclass] Enable IRQ Interrupts |
| __get_APSR⚠ | [Experimental] [ARM and mclass] Get APSR Register |
| __get_BASEPRI⚠ | [Experimental] [ARM and mclass] Get Base Priority |
| __get_CONTROL⚠ | [Experimental] [ARM and mclass] Get Control Register |
| __get_FAULTMASK⚠ | [Experimental] [ARM and mclass] Get Fault Mask |
| __get_IPSR⚠ | [Experimental] [ARM and mclass] Get IPSR Register |
| __get_MSP⚠ | [Experimental] [ARM and mclass] Get Main Stack Pointer |
| __get_PRIMASK⚠ | [Experimental] [ARM and mclass] Get Priority Mask |
| __get_PSP⚠ | [Experimental] [ARM and mclass] Get Process Stack Pointer |
| __get_xPSR⚠ | [Experimental] [ARM and mclass] Get xPSR Register |
| __set_BASEPRI⚠ | [Experimental] [ARM and mclass] Set Base Priority |
| __set_BASEPRI_MAX⚠ | [Experimental] [ARM and mclass] Set Base Priority with condition |
| __set_CONTROL⚠ | [Experimental] [ARM and mclass] Set Control Register |
| __set_FAULTMASK⚠ | [Experimental] [ARM and mclass] Set Fault Mask |
| __set_MSP⚠ | [Experimental] [ARM and mclass] Set Main Stack Pointer |
| __set_PRIMASK⚠ | [Experimental] [ARM and mclass] Set Priority Mask |
| __set_PSP⚠ | [Experimental] [ARM and mclass] Set Process Stack Pointer |
| _clz_u8⚠ | [Experimental] [ARM and v7] Count Leading Zeros. |
| _clz_u16⚠ | [Experimental] [ARM and v7] Count Leading Zeros. |
| _clz_u32⚠ | [Experimental] [ARM and v7] Count Leading Zeros. |
| _rbit_u32⚠ | [Experimental] [ARM and v7] Reverse the bit order. |
| _rev_u16⚠ | [Experimental] [ARM] Reverse the order of the bytes. |
| _rev_u32⚠ | [Experimental] [ARM] Reverse the order of the bytes. |
| qadd⚠ | [Experimental] [ARM] Signed saturating addition |
| qadd8⚠ | [Experimental] [ARM] Saturating four 8-bit integer additions |
| qadd16⚠ | [Experimental] [ARM] Saturating two 16-bit integer additions |
| qasx⚠ | [Experimental] [ARM] Saturating 16-bit add and subtract with exchange: res[0] = a[0] - b[1], res[1] = a[1] + b[0] |
| qsax⚠ | [Experimental] [ARM] Saturating 16-bit subtract and add with exchange: res[0] = a[0] + b[1], res[1] = a[1] - b[0] |
| qsub⚠ | [Experimental] [ARM] Signed saturating subtraction |
| qsub8⚠ | [Experimental] [ARM] Saturating four 8-bit integer subtractions |
| qsub16⚠ | [Experimental] [ARM] Saturating two 16-bit integer subtractions |
| sadd8⚠ | [Experimental] [ARM] Parallel byte-wise signed addition: res[i] = a[i] + b[i] for each of the four bytes |
| sadd16⚠ | [Experimental] [ARM] Parallel halfword-wise signed addition: res[i] = a[i] + b[i] for each of the two halfwords |
| sasx⚠ | [Experimental] [ARM] Signed 16-bit add and subtract with exchange: res[0] = a[0] - b[1], res[1] = a[1] + b[0] |
| sel⚠ | [Experimental] [ARM] Selects each byte of the result from a or b according to the APSR.GE flags: res[i] = if GE[i] { a[i] } else { b[i] } |
| shadd8⚠ | [Experimental] [ARM] Signed halving parallel byte-wise addition. |
| shadd16⚠ | [Experimental] [ARM] Signed halving parallel halfword-wise addition. |
| shsub8⚠ | [Experimental] [ARM] Signed halving parallel byte-wise subtraction. |
| shsub16⚠ | [Experimental] [ARM] Signed halving parallel halfword-wise subtraction. |
| smuad⚠ | [Experimental] [ARM] Signed Dual Multiply Add. |
| smuadx⚠ | [Experimental] [ARM] Signed Dual Multiply Add Reversed. |
| smusd⚠ | [Experimental] [ARM] Signed Dual Multiply Subtract. |
| smusdx⚠ | [Experimental] [ARM] Signed Dual Multiply Subtract Reversed. |
| vadd_f32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_s8⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_s16⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_s32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_u8⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_u16⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vadd_u32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddl_s8⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddl_s16⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddl_s32⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddl_u8⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddl_u16⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddl_u32⚠ | [Experimental] [neon and v7 and ARM] Vector long add. |
| vaddq_f32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_s8⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_s16⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_s32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_s64⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_u8⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_u16⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_u32⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vaddq_u64⚠ | [Experimental] [neon and v7 and ARM] Vector add. |
| vmovl_s8⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovl_s16⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovl_s32⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovl_u8⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovl_u16⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovl_u32⚠ | [Experimental] [neon and v7 and ARM] Vector long move. |
| vmovn_s16⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vmovn_s32⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vmovn_s64⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vmovn_u16⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vmovn_u32⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vmovn_u64⚠ | [Experimental] [neon and v7 and ARM] Vector narrow integer. |
| vpmax_f32⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_s8⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_s16⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_s32⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_u8⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_u16⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmax_u32⚠ | [Experimental] [neon and v7 and ARM] Folding maximum of adjacent pairs |
| vpmin_f32⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_s8⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_s16⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_s32⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_u8⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_u16⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vpmin_u32⚠ | [Experimental] [neon and v7 and ARM] Folding minimum of adjacent pairs |
| vrsqrte_f32⚠ | [Experimental] [ARM and neon] Reciprocal square-root estimate. |
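As a usage note for the saturating DSP intrinsics above, the `q*` functions clamp the result to the lane's value range instead of wrapping on overflow. A minimal sketch, again assuming a nightly toolchain targeting `arm` where these intrinsics are available behind the unstable `stdsimd` feature gate:

```rust
// Minimal sketch (assumes nightly Rust and an `arm` target where the DSP
// intrinsics are available behind the unstable `stdsimd` feature gate).
#![feature(stdsimd)]

#[cfg(target_arch = "arm")]
unsafe fn clamped_add(a: i32, b: i32) -> i32 {
    use core::arch::arm::qadd;

    // Signed saturating addition: the result is clamped to
    // [i32::MIN, i32::MAX] instead of wrapping on overflow.
    qadd(a, b)
}

// For example, clamped_add(i32::MAX, 1) yields i32::MAX rather than
// wrapping around to i32::MIN.
```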
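"Folding" in the `vpmax_*`/`vpmin_*` entries means each result lane comes from adjacent lanes of one input rather than from matching lanes of both inputs. A hedged sketch under the same assumptions as the earlier NEON example (nightly Rust, `arm` target with NEON, `stdsimd` feature gate at the crate root):

```rust
// Minimal sketch: pairwise (folding) maximum with vpmax_f32.
// res = [max(a[0], a[1]), max(b[0], b[1])]
#[cfg(all(target_arch = "arm", target_feature = "neon"))]
unsafe fn pairwise_max(a: [f32; 2], b: [f32; 2]) -> [f32; 2] {
    use core::arch::arm::{float32x2_t, vpmax_f32};
    use core::mem::transmute;

    let a: float32x2_t = transmute(a);
    let b: float32x2_t = transmute(b);
    transmute(vpmax_f32(a, b))
}
```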