p3.2xlarge
The p3.2xlarge instance is in the GPU instance family, with 8 vCPUs, 61.0 GiB of memory, and up to 10 Gbps of network bandwidth, starting at $3.06 per hour.
| Compute | Value |
| --- | --- |
| vCPUs | 8 |
| Memory (GiB) | 61.0 |
| Memory per vCPU (GiB) | 7.62 |
| Physical Processor | Intel Xeon E5-2686 v4 (Broadwell) |
| Clock Speed (GHz) | 2.3 |
| CPU Architecture | x86_64 |
| GPU | 1 |
| GPU Architecture | NVIDIA Tesla V100 |
| Video Memory (GiB) | 16 |
| GPU Compute Capability | 7.0 |
| FPGA | 0 |
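On the instance itself, the GPU model and compute capability listed above can be confirmed from Python. This is a minimal sketch assuming PyTorch is installed (it is not part of this spec):

```python
import torch

# Report the GPU model and CUDA compute capability of device 0.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)                # e.g. a Tesla V100 variant
    major, minor = torch.cuda.get_device_capability(0)  # expected (7, 0) on a V100
    print(f"{name}: compute capability {major}.{minor}")
else:
    print("No CUDA-capable GPU visible")
```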
| Networking | Value |
| --- | --- |
| Network Performance (Gbps) | Up to 10 |
| Enhanced Networking | True |
| IPv6 | True |
| Placement Group | True |
| Storage | Value |
| --- | --- |
| EBS Optimized | True |
| Max Bandwidth (Mbps) on EBS | 1750 |
| Max Throughput (MB/s) on EBS | 218.75 |
| Max I/O Operations/second (IOPS) | 10000 |
| Baseline Bandwidth (Mbps) on EBS | 1750 |
| Baseline Throughput (MB/s) on EBS | 218.75 |
| Baseline I/O Operations/second (IOPS) | 10000 |
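The EBS throughput figures follow directly from the bandwidth figures: 1750 Mbps ÷ 8 bits per byte = 218.75 MB/s, for both the maximum and baseline rows.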
| Amazon | Value |
| --- | --- |
| Generation | Current |
| Instance Type | p3.2xlarge |
| Family | GPU instance |
| Name | P3 High Performance GPU Double Extra Large |
| Elastic Map Reduce (EMR) | True |
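The figures above can also be checked against the EC2 API. Here is a minimal sketch using boto3, assuming AWS credentials and the boto3 package are available (neither is part of this spec):

```python
import boto3

# Query the EC2 API for the published p3.2xlarge specification.
ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_types(InstanceTypes=["p3.2xlarge"])
info = resp["InstanceTypes"][0]

print(info["VCpuInfo"]["DefaultVCpus"])            # 8
print(info["MemoryInfo"]["SizeInMiB"])             # 62464 MiB (61.0 GiB)
print(info["GpuInfo"]["Gpus"][0]["Name"])          # V100
print(info["NetworkInfo"]["NetworkPerformance"])   # Up to 10 Gigabit
print(info["EbsInfo"]["EbsOptimizedInfo"]["BaselineBandwidthInMbps"])  # 1750
```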