p3.16xlarge
The p3.16xlarge instance is in the GPU instance family, with 64 vCPUs, 488.0 GiB of memory, and 25 Gbps of network bandwidth, starting at $24.48 per hour.
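The same figures can be pulled programmatically from the EC2 API rather than read off this page. Below is a minimal sketch using boto3's `describe_instance_types`; it assumes AWS credentials and a region are already configured, and the field names follow the standard DescribeInstanceTypes response shape.

```python
# Sketch: query p3.16xlarge specs from the EC2 API.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["p3.16xlarge"])
info = resp["InstanceTypes"][0]

print("vCPUs:", info["VCpuInfo"]["DefaultVCpus"])                  # 64
print("Memory (GiB):", info["MemoryInfo"]["SizeInMiB"] / 1024)     # 488.0
print("GPUs:", sum(g["Count"] for g in info["GpuInfo"]["Gpus"]))   # 8
print("Network:", info["NetworkInfo"]["NetworkPerformance"])       # "25 Gigabit"
```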
| Compute | Value |
|---|---|
| vCPUs | 64 |
| Memory (GiB) | 488.0 |
| Memory per vCPU (GiB) | 7.62 |
| Physical Processor | Intel Xeon E5-2686 v4 (Broadwell) |
| Clock Speed (GHz) | 2.3 |
| CPU Architecture | x86_64 |
| GPUs | 8 |
| GPU Model | NVIDIA Tesla V100 |
| Video Memory (GiB) | 128 |
| GPU Compute Capability | 7.0 |
| FPGAs | 0 |
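On a running p3.16xlarge, the eight Tesla V100s and their 7.0 compute capability can be confirmed from inside the guest. A minimal sketch, assuming PyTorch with CUDA support is installed (for example via a Deep Learning AMI):

```python
# Sketch: confirm GPU count, model, and compute capability on the instance.
# Assumes PyTorch with CUDA support is installed.
import torch

assert torch.cuda.is_available()
print("GPU count:", torch.cuda.device_count())                       # expected: 8
print("Model:", torch.cuda.get_device_name(0))                       # e.g. "Tesla V100-SXM2-16GB"
print("Compute capability:", torch.cuda.get_device_capability(0))    # expected: (7, 0)
```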
| Networking | Value |
|---|---|
| Network Performance (Gbps) | 25 |
| Enhanced Networking | True |
| IPv6 | True |
| Placement Group | True |
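Because the instance supports placement groups, multi-node training setups typically launch several p3.16xlarge instances into a cluster placement group to get low-latency, full-bandwidth networking between nodes. A minimal sketch, where the AMI ID, key pair, and group name are placeholders:

```python
# Sketch: launch p3.16xlarge instances into a cluster placement group.
# AMI ID, key name, and group name below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.create_placement_group(GroupName="p3-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI
    InstanceType="p3.16xlarge",
    MinCount=2,
    MaxCount=2,
    KeyName="my-key",                  # placeholder key pair
    Placement={"GroupName": "p3-cluster"},
)
```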
| Storage | Value |
|---|---|
| EBS Optimized | True |
| Max Bandwidth on EBS (Mbps) | 14000 |
| Max Throughput on EBS (MB/s) | 1750.0 |
| Max I/O Operations per Second (IOPS) | 80000 |
| Baseline Bandwidth on EBS (Mbps) | 14000 |
| Baseline Throughput on EBS (MB/s) | 1750.0 |
| Baseline I/O Operations per Second (IOPS) | 80000 |
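When sizing EBS for this instance, the relevant ceilings are the 80,000 IOPS and 1,750 MB/s aggregate limits above; a single gp3 volume (which tops out around 16,000 IOPS and 1,000 MB/s) cannot saturate them, so I/O-heavy setups usually stripe several volumes. A minimal sketch of attaching one provisioned gp3 data volume at launch, with a placeholder AMI ID:

```python
# Sketch: launch with a provisioned gp3 data volume, keeping per-volume
# settings within the instance's aggregate EBS limits. AMI ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI
    InstanceType="p3.16xlarge",
    MinCount=1,
    MaxCount=1,
    EbsOptimized=True,                 # on by default for p3, shown explicitly
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sdf",
            "Ebs": {
                "VolumeType": "gp3",
                "VolumeSize": 500,     # GiB
                "Iops": 16000,         # gp3 per-volume ceiling
                "Throughput": 1000,    # MB/s, gp3 per-volume ceiling
            },
        }
    ],
)
```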
| Amazon | Value |
|---|---|
| Generation | current |
| Instance Type | p3.16xlarge |
| Family | GPU instance |
| Name | P3 High Performance GPU 16xlarge |
| Elastic Map Reduce (EMR) | True |
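Since the instance type is supported by EMR, it can be used as a worker type in a cluster. A minimal sketch using boto3's `run_job_flow`, where the release label, key pair, and IAM roles are placeholders for an existing account setup:

```python
# Sketch: start an EMR cluster with p3.16xlarge core nodes.
# Release label, key name, and roles are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")
emr.run_job_flow(
    Name="gpu-emr-cluster",
    ReleaseLabel="emr-6.10.0",                 # example release label
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "p3.16xlarge", "InstanceCount": 2},
        ],
        "Ec2KeyName": "my-key",                # placeholder key pair
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```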