Instance Details
| Compute | Value |
|---|---|
| vCPUs | 32 |
| Memory (GiB) | 128 |
| Memory per vCPU (GiB) | 4 |
| Physical Processor | AMD EPYC 7R13 Processor |
| Clock Speed (GHz) | 2.95 |
| CPU Architecture | x86_64 |
| GPU | 1 |
| GPU Average Wattage | 0 W |
| GPU Architecture | AWS Inferentia2 |
| Video Memory (GiB) | 32 |
| GPU Compute Capability | 0 |
| FPGA | 0 |
| ffmpeg FPS | 196 |
| CoreMark iterations/Second | 28557.149996 |
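
The compute figures above come from the EC2 instance-type metadata and can be cross-checked programmatically. A minimal sketch using boto3's `describe_instance_types` (assuming boto3 is installed, AWS credentials and a default region are configured, and the response keys follow the current EC2 API shape):

```python
# Minimal sketch: read the compute specs for inf2.8xlarge from the EC2
# DescribeInstanceTypes API. Assumes boto3, credentials, and a default region.
import boto3

ec2 = boto3.client("ec2")
info = ec2.describe_instance_types(InstanceTypes=["inf2.8xlarge"])["InstanceTypes"][0]

print("vCPUs:", info["VCpuInfo"]["DefaultVCpus"])                         # 32
print("Memory (GiB):", info["MemoryInfo"]["SizeInMiB"] / 1024)            # 128.0
print("Arch:", info["ProcessorInfo"]["SupportedArchitectures"])           # ['x86_64']
print("Clock (GHz):", info["ProcessorInfo"]["SustainedClockSpeedInGhz"])  # 2.95
# Inferentia2 chips are reported under InferenceAcceleratorInfo, not GpuInfo.
print("Accelerators:", info.get("InferenceAcceleratorInfo"))
```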

| NUMA Architecture | Value |
|---|---|
| Uses NUMA Architecture | Yes |
| NUMA Node Count | 2 |
| Max NUMA Distance | 12 |
| Cores per NUMA Node (Avg) | 8.0 |
| Threads per NUMA Node (Avg) | 16.0 |
| Memory per NUMA Node (Avg) | 63069 MB |
| L3 Cache per NUMA Node (Avg) | 32.0 MB |
| L3 Cache Shared | Yes |
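
On the instance itself, the NUMA layout in this table can be confirmed from sysfs (or interactively with `numactl --hardware`). A minimal sketch, assuming a Linux guest:

```python
# Minimal sketch: read NUMA node count, CPU lists, node distances, and per-node
# memory from sysfs on a Linux guest.
from pathlib import Path

nodes = sorted(Path("/sys/devices/system/node").glob("node[0-9]*"))
print("NUMA node count:", len(nodes))                      # 2 on inf2.8xlarge

for node in nodes:
    cpulist = (node / "cpulist").read_text().strip()       # SMT siblings included
    distances = (node / "distance").read_text().split()    # max entry -> 12 here
    meminfo = (node / "meminfo").read_text().splitlines()
    total_kb = int(next(l for l in meminfo if "MemTotal" in l).split()[-2])
    print(f"{node.name}: cpus={cpulist} distances={distances} mem={total_kb // 1024} MB")
```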

| Networking | Value |
|---|---|
| Network Performance (Gbps) | Up to 25 |
| Enhanced Networking | true |
| IPv6 | true |
| Placement Group | |
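
Enhanced networking on current Nitro-based instances is delivered by the ENA driver; whether it is active can be checked from inside the guest. A minimal sketch, assuming a Linux guest (interface names vary: eth0, ens5, ...):

```python
# Minimal sketch: list network interfaces and their kernel drivers; "ena"
# indicates enhanced networking is in use.
from pathlib import Path

for iface in sorted(Path("/sys/class/net").iterdir()):
    driver_link = iface / "device" / "driver"
    driver = driver_link.resolve().name if driver_link.exists() else "n/a"  # loopback has no driver
    print(f"{iface.name}: driver={driver}")
```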

| Storage | Value |
|---|---|
| EBS Optimized | true |
| Max Bandwidth (Mbps) on EBS | 10000 |
| Max Throughput (MB/s) on EBS | 1250 |
| Max I/O Operations/Second (IOPS) | 40000 |
| Baseline Bandwidth (Mbps) on EBS | 10000 |
| Baseline Throughput (MB/s) on EBS | 1250 |
| Baseline I/O Operations/Second (IOPS) | 40000 |
| Devices | 0 |
| Swap Partition | false |
| NVMe Drive | false |
| Disk Space (GiB) | 0 |
| SSD | false |
| Initialize Storage | false |
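
The EBS bandwidth and throughput rows are two views of the same limit: 10000 Mbps divided by 8 bits per byte is 1250 MB/s, and baseline equalling maximum means this size does not depend on EBS burst credits. A minimal sketch that checks the arithmetic and pulls the same numbers from `describe_instance_types` (same boto3 assumptions as above):

```python
# Minimal sketch: relate EBS bandwidth (Mbps) to throughput (MB/s) and confirm
# the baseline/maximum figures via EbsInfo.EbsOptimizedInfo.
import boto3

bandwidth_mbps = 10_000
print("Throughput (MB/s):", bandwidth_mbps / 8)        # 1250.0, matching the table

ebs = boto3.client("ec2").describe_instance_types(InstanceTypes=["inf2.8xlarge"])[
    "InstanceTypes"][0]["EbsInfo"]["EbsOptimizedInfo"]
print(ebs["BaselineBandwidthInMbps"], ebs["MaximumBandwidthInMbps"])   # 10000 10000
print(ebs["BaselineIops"], ebs["MaximumIops"])                         # 40000 40000
```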

| Amazon | Value |
|---|---|
| Generation | current |
| Instance Type | inf2.8xlarge |
| Family | Machine Learning ASIC Instances |
| Name | INF2 Eight Extra Large |
| Elastic MapReduce (EMR) | false |



