Module core::arch::aarch64
Platform-specific intrinsics for the aarch64 platform.
See the module documentation for more details.
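These intrinsics are experimental, so calling them requires a nightly toolchain with the relevant feature gate, a `target_arch = "aarch64"` target, and an `unsafe` block. A minimal sketch of the usual call pattern follows; the `stdsimd` gate and the `add_bytes` helper are assumptions for illustration, not part of this module:

```rust
#![feature(stdsimd)] // assumed nightly gate for these experimental intrinsics

/// Lane-wise wrapping add of two 16-byte arrays via `vaddq_u8`.
/// NEON is a mandatory feature on AArch64, so no runtime detection is needed.
#[cfg(target_arch = "aarch64")]
unsafe fn add_bytes(a: [u8; 16], b: [u8; 16]) -> [u8; 16] {
    use core::arch::aarch64::{uint8x16_t, vaddq_u8};
    // `uint8x16_t` has the same size as `[u8; 16]`, so the transmutes round-trip.
    let va: uint8x16_t = core::mem::transmute(a);
    let vb: uint8x16_t = core::mem::transmute(b);
    core::mem::transmute(vaddq_u8(va, vb))
}
```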
Structs
| float32x2_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of two packed `f32`. |
| float32x4_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of four packed `f32`. |
| float64x1_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of one packed `f64`. |
| float64x2_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of two packed `f64`. |
| int16x2_t | [ Experimental ] [ AArch64 ] ARM-specific 32-bit wide vector of two packed `i16`. |
| int16x4_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of four packed `i16`. |
| int16x8_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of eight packed `i16`. |
| int32x2_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of two packed `i32`. |
| int32x4_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of four packed `i32`. |
| int64x1_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of one packed `i64`. |
| int64x2_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of two packed `i64`. |
| int8x4_t | [ Experimental ] [ AArch64 ] ARM-specific 32-bit wide vector of four packed `i8`. |
| int8x8_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of eight packed `i8`. |
| int8x16_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of sixteen packed `i8`. |
| int8x16x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `int8x16_t` vectors. |
| int8x16x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `int8x16_t` vectors. |
| int8x16x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `int8x16_t` vectors. |
| int8x8x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `int8x8_t` vectors. |
| int8x8x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `int8x8_t` vectors. |
| int8x8x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `int8x8_t` vectors. |
| poly16x4_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of four packed `p16`. |
| poly16x8_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of eight packed `p16`. |
| poly64x1_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of one packed `p64`. |
| poly64x2_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of two packed `p64`. |
| poly8x8_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide polynomial vector of eight packed `p8`. |
| poly8x16_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of sixteen packed `p8`. |
| poly8x16x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `poly8x16_t` vectors. |
| poly8x16x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `poly8x16_t` vectors. |
| poly8x16x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `poly8x16_t` vectors. |
| poly8x8x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `poly8x8_t` vectors. |
| poly8x8x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `poly8x8_t` vectors. |
| poly8x8x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `poly8x8_t` vectors. |
| uint16x2_t | [ Experimental ] [ AArch64 ] ARM-specific 32-bit wide vector of two packed `u16`. |
| uint16x4_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of four packed `u16`. |
| uint16x8_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of eight packed `u16`. |
| uint32x2_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of two packed `u32`. |
| uint32x4_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of four packed `u32`. |
| uint64x1_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of one packed `u64`. |
| uint64x2_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of two packed `u64`. |
| uint8x4_t | [ Experimental ] [ AArch64 ] ARM-specific 32-bit wide vector of four packed `u8`. |
| uint8x8_t | [ Experimental ] [ AArch64 ] ARM-specific 64-bit wide vector of eight packed `u8`. |
| uint8x16_t | [ Experimental ] [ AArch64 ] ARM-specific 128-bit wide vector of sixteen packed `u8`. |
| uint8x16x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `uint8x16_t` vectors. |
| uint8x16x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `uint8x16_t` vectors. |
| uint8x16x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `uint8x16_t` vectors. |
| uint8x8x2_t | [ Experimental ] [ AArch64 ] ARM-specific type containing two `uint8x8_t` vectors. |
| uint8x8x3_t | [ Experimental ] [ AArch64 ] ARM-specific type containing three `uint8x8_t` vectors. |
| uint8x8x4_t | [ Experimental ] [ AArch64 ] ARM-specific type containing four `uint8x8_t` vectors. |
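The naming scheme encodes the element type and lane count: `uint8x8_t` is eight `u8` lanes in a 64-bit register, `uint8x16_t` is sixteen `u8` lanes in a 128-bit register, and the `xN` types bundle two to four same-width vectors (used by the multi-table look-ups such as `vqtbl2`..`vqtbl4` below). A short sketch of moving between the 64-bit and 128-bit families with `vcombine_u8`; the `widen_demo` wrapper is hypothetical:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn widen_demo() -> core::arch::aarch64::uint8x16_t {
    use core::arch::aarch64::{uint8x8_t, vcombine_u8};
    // Two 64-bit wide vectors, eight `u8` lanes each.
    let lo: uint8x8_t = core::mem::transmute([0u8, 1, 2, 3, 4, 5, 6, 7]);
    let hi: uint8x8_t = core::mem::transmute([8u8, 9, 10, 11, 12, 13, 14, 15]);
    // One 128-bit wide vector: `lo` becomes the low half, `hi` the high half.
    vcombine_u8(lo, hi)
}
```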
Functions
| __DMB⚠ | [ Experimental ] [ AArch64 and mclass ] Data Memory Barrier |
| __DSB⚠ | [ Experimental ] [ AArch64 and mclass ] Data Synchronization Barrier |
| __ISB⚠ | [ Experimental ] [ AArch64 and mclass ] Instruction Synchronization Barrier |
| __NOP⚠ | [ Experimental ] [ AArch64 and mclass ] No Operation |
| __SEV⚠ | [ Experimental ] [ AArch64 and mclass ] Send Event |
| __WFE⚠ | [ Experimental ] [ AArch64 and mclass ] Wait For Event |
| __WFI⚠ | [ Experimental ] [ AArch64 and mclass ] Wait For Interrupt |
| __disable_fault_irq⚠ | [ Experimental ] [ AArch64 and mclass ] Disable FIQ |
| __disable_irq⚠ | [ Experimental ] [ AArch64 and mclass ] Disable IRQ Interrupts |
| __enable_fault_irq⚠ | [ Experimental ] [ AArch64 and mclass ] Enable FIQ |
| __enable_irq⚠ | [ Experimental ] [ AArch64 and mclass ] Enable IRQ Interrupts |
| __get_APSR⚠ | [ Experimental ] [ AArch64 and mclass ] Get APSR Register |
| __get_BASEPRI⚠ | [ Experimental ] [ AArch64 and mclass ] Get Base Priority |
| __get_CONTROL⚠ | [ Experimental ] [ AArch64 and mclass ] Get Control Register |
| __get_FAULTMASK⚠ | [ Experimental ] [ AArch64 and mclass ] Get Fault Mask |
| __get_IPSR⚠ | [ Experimental ] [ AArch64 and mclass ] Get IPSR Register |
| __get_MSP⚠ | [ Experimental ] [ AArch64 and mclass ] Get Main Stack Pointer |
| __get_PRIMASK⚠ | [ Experimental ] [ AArch64 and mclass ] Get Priority Mask |
| __get_PSP⚠ | [ Experimental ] [ AArch64 and mclass ] Get Process Stack Pointer |
| __get_xPSR⚠ | [ Experimental ] [ AArch64 and mclass ] Get xPSR Register |
| __set_BASEPRI⚠ | [ Experimental ] [ AArch64 and mclass ] Set Base Priority |
| __set_BASEPRI_MAX⚠ | [ Experimental ] [ AArch64 and mclass ] Set Base Priority with condition |
| __set_CONTROL⚠ | [ Experimental ] [ AArch64 and mclass ] Set Control Register |
| __set_FAULTMASK⚠ | [ Experimental ] [ AArch64 and mclass ] Set Fault Mask |
| __set_MSP⚠ | [ Experimental ] [ AArch64 and mclass ] Set Main Stack Pointer |
| __set_PRIMASK⚠ | [ Experimental ] [ AArch64 and mclass ] Set Priority Mask |
| __set_PSP⚠ | [ Experimental ] [ AArch64 and mclass ] Set Process Stack Pointer |
| _cls_u32⚠ | [ Experimental ] [ AArch64 ] Counts the leading most significant bits set. |
| _cls_u64⚠ | [ Experimental ] [ AArch64 ] Counts the leading most significant bits set. |
| _clz_u64⚠ | [ Experimental ] [ AArch64 ] Count Leading Zeros. |
| _rbit_u64⚠ | [ Experimental ] [ AArch64 ] Reverse the bit order. |
| _rev_u16⚠ | [ Experimental ] [ AArch64 ] Reverse the order of the bytes. |
| _rev_u32⚠ | [ Experimental ] [ AArch64 ] Reverse the order of the bytes. |
| _rev_u64⚠ | [ Experimental ] [ AArch64 ] Reverse the order of the bytes. |
| qadd⚠ | [ Experimental ] [ AArch64 ] Signed saturating addition |
| qadd8⚠ | [ Experimental ] [ AArch64 ] Saturating four 8-bit integer additions |
| qadd16⚠ | [ Experimental ] [ AArch64 ] Saturating two 16-bit integer additions |
| qasx⚠ | [ Experimental ] [ AArch64 ] Returns the 16-bit signed saturated equivalent of res[0] = a[0] - b[1], res[1] = a[1] + b[0] |
| qsax⚠ | [ Experimental ] [ AArch64 ] Returns the 16-bit signed saturated equivalent of res[0] = a[0] + b[1], res[1] = a[1] - b[0] |
| qsub⚠ | [ Experimental ] [ AArch64 ] Signed saturating subtraction |
| qsub8⚠ | [ Experimental ] [ AArch64 ] Saturating four 8-bit integer subtractions |
| qsub16⚠ | [ Experimental ] [ AArch64 ] Saturating two 16-bit integer subtractions |
| sadd8⚠ | [ Experimental ] [ AArch64 ] Returns the 8-bit signed saturated equivalent of res[i] = a[i] + b[i] for each of the four byte lanes |
| sadd16⚠ | [ Experimental ] [ AArch64 ] Returns the 16-bit signed saturated equivalent of res[i] = a[i] + b[i] for each of the two halfword lanes |
| sasx⚠ | [ Experimental ] [ AArch64 ] Returns the 16-bit signed equivalent of res[0] = a[0] - b[1], res[1] = a[1] + b[0] |
| sel⚠ | [ Experimental ] [ AArch64 ] Select bytes from each operand according to APSR GE flags |
| shadd8⚠ | [ Experimental ] [ AArch64 ] Signed halving parallel byte-wise addition. |
| shadd16⚠ | [ Experimental ] [ AArch64 ] Signed halving parallel halfword-wise addition. |
| shsub8⚠ | [ Experimental ] [ AArch64 ] Signed halving parallel byte-wise subtraction. |
| shsub16⚠ | [ Experimental ] [ AArch64 ] Signed halving parallel halfword-wise subtraction. |
| smlad⚠ | [ Experimental ] [ AArch64 ] Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation. |
| smlsd⚠ | [ Experimental ] [ AArch64 ] Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection. |
| smuad⚠ | [ Experimental ] [ AArch64 ] Signed Dual Multiply Add. |
| smuadx⚠ | [ Experimental ] [ AArch64 ] Signed Dual Multiply Add Reversed. |
| smusd⚠ | [ Experimental ] [ AArch64 ] Signed Dual Multiply Subtract. |
| smusdx⚠ | [ Experimental ] [ AArch64 ] Signed Dual Multiply Subtract Reversed. |
| usad8⚠ | [ Experimental ] [ AArch64 ] Sum of 8-bit absolute differences. |
| usad8a⚠ | [ Experimental ] [ AArch64 ] Sum of 8-bit absolute differences and constant. |
| vadd_f32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_f64⚠ | [ Experimental ] [ AArch64 and neon ] Vector add. |
| vadd_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vadd_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddd_s64⚠ | [ Experimental ] [ AArch64 and neon ] Vector add. |
| vaddd_u64⚠ | [ Experimental ] [ AArch64 and neon ] Vector add. |
| vaddl_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddl_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddl_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddl_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddl_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddl_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long add. |
| vaddq_f32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_f64⚠ | [ Experimental ] [ AArch64 and neon ] Vector add. |
| vaddq_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_s64⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaddq_u64⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector add. |
| vaesdq_u8⚠ | [ Experimental ] [ AArch64 and crypto ] AES single round decryption. |
| vaeseq_u8⚠ | [ Experimental ] [ AArch64 and crypto ] AES single round encryption. |
| vaesimcq_u8⚠ | [ Experimental ] [ AArch64 and crypto ] AES inverse mix columns. |
| vaesmcq_u8⚠ | [ Experimental ] [ AArch64 and crypto ] AES mix columns. |
| vcombine_f32⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_f64⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_p8⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_p16⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_p64⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_s8⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_s16⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_s32⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_s64⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_u8⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_u16⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_u32⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vcombine_u64⚠ | [ Experimental ] [ AArch64 and neon ] Vector combine |
| vmaxv_f32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_s8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_s16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_s32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_u8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_u16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxv_u32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_f32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_f64⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_s8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_s16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_s32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_u8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_u16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vmaxvq_u32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector max. |
| vminv_f32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_s8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_s16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_s32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_u8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_u16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminv_u32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_f32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_f64⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_s8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_s16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_s32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_u8⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_u16⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vminvq_u32⚠ | [ Experimental ] [ AArch64 and neon ] Horizontal vector min. |
| vmovl_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovl_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovl_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovl_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovl_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovl_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector long move. |
| vmovn_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vmovn_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vmovn_s64⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vmovn_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vmovn_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vmovn_u64⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Vector narrow integer. |
| vpmax_f32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmax_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding maximum of adjacent pairs |
| vpmaxq_f32⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_f64⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_s8⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_s16⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_s32⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_u8⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_u16⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmaxq_u32⚠ | [ Experimental ] [ AArch64 and neon ] Folding maximum of adjacent pairs |
| vpmin_f32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_s8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_s16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_s32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_u8⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_u16⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpmin_u32⚠ | [ Experimental ] [ neon and v7 and AArch64 ] Folding minimum of adjacent pairs |
| vpminq_f32⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_f64⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_s8⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_s16⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_s32⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_u8⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_u16⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vpminq_u32⚠ | [ Experimental ] [ AArch64 and neon ] Folding minimum of adjacent pairs |
| vqtbl1_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl1_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl1_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl1q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl1q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl1q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl2q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl3q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbl4q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Table look-up |
| vqtbx1_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx1_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx1_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx1q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx1q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx1q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx2q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx3q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4q_p8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4q_s8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vqtbx4q_u8⚠ | [ Experimental ] [ AArch64 and neon ] Extended table look-up |
| vrsqrte_f32⚠ | [ Experimental ] [ AArch64 and neon ] Reciprocal square-root estimate. |
| vsha1cq_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 hash update accelerator, choose. |
| vsha1h_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 fixed rotate. |
| vsha1mq_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 hash update accelerator, majority. |
| vsha1pq_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 hash update accelerator, parity. |
| vsha1su0q_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 schedule update accelerator, first part. |
| vsha1su1q_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA1 schedule update accelerator, second part. |
| vsha256h2q_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA256 hash update accelerator, upper part. |
| vsha256hq_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA256 hash update accelerator. |
| vsha256su0q_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA256 schedule update accelerator, first part. |
| vsha256su1q_u32⚠ | [ Experimental ] [ AArch64 and crypto ] SHA256 schedule update accelerator, second part. |
| vtbl1_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl1_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl1_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl2_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl2_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl2_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl3_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl3_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl3_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl4_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl4_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbl4_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Table look-up |
| vtbx1_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx1_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx1_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx2_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx2_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx2_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx3_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx3_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx3_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx4_p8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx4_s8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
| vtbx4_u8⚠ | [ Experimental ] [ AArch64 and neon,v7 ] Extended table look-up |
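As one illustration of how the entries above compose, here is a hedged sketch that shuffles a 16-byte table with `vqtbl1q_u8` and then reduces the result with the horizontal maximum `vmaxvq_u8`. The `lookup_then_max` wrapper is hypothetical, not part of this module:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn lookup_then_max(table: [u8; 16], indices: [u8; 16]) -> u8 {
    use core::arch::aarch64::{uint8x16_t, vmaxvq_u8, vqtbl1q_u8};
    let t: uint8x16_t = core::mem::transmute(table);
    let i: uint8x16_t = core::mem::transmute(indices);
    // Each output lane is `table[indices[lane]]`; out-of-range indices yield 0.
    let shuffled = vqtbl1q_u8(t, i);
    // Horizontal reduction: the maximum across all sixteen lanes.
    vmaxvq_u8(shuffled)
}
```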