Everything that’s digital and online—all the webpages, software, content, and hardware we use—can be distilled down into bits of data. At their simplest, these are the 1s and 0s that create the building blocks of data and code. In this article, we’ll explain how digital units apply when choosing enterprise data storage solutions—and how storage-specific bandwidth calculations are performed in 2025.
TL;DR for storage admins: Convert between bits and bytes at 8 bits per byte. 100 Gbps = 12.5 GB/s theoretical, or ~10 GB/s practical after protocol overhead.
What’s a Bit?
A bit (b) is the smallest unit of measurable digital information. A bit can have a value of either a “0” or a “1”—either “off” or “on,” respectively. Bits (binary code) are the building blocks of all computer code.
What’s a Byte?
A byte (B) is a group of 8 bits. A byte can have values ranging from “0” to “255,” so 256 values in total can be stored in a byte.
Bits vs. Bytes: What They Measure
Bits and bytes measure digital information and data. Like a mile or a kilometer measures distance, bits and bytes measure data. But if you drill down a bit more, you’ll find bits and bytes measure digital information differently.
- Bits: Speed of data transfer (e.g., Mbps)
- Bytes: Size of data or storage (e.g., MB)
A bit measures speed of data, while a byte measures size—namely, a file’s size or the capacity of storage used to store those files.
How to Measure Millions and Billions of Bits and Bytes
Because bits and bytes are the smallest units of measure for digital information, they’re not the most practical units of measure. Using the example above, you wouldn’t describe the distance between two cities in inches—you’d measure it in miles. The same principle applies with bits and bytes.
To measure millions of bits and bytes, we use larger units of measurement: megabits and megabytes. A megabit is equal to 1 million bits, and a megabyte is equal to 1 million bytes, or 8 megabits. (This becomes important later when we talk about converting between the two and why you’d want to do that.)
Megabit vs. Megabyte
Now that we’ve established what megabits and megabytes are, let’s look at what they measure and how they’re different.
Difference between Megabits and Megabytes
The difference between megabits and megabytes is what they measure. Both are units of digital information, but they measure different things:
- A megabit measures the speed of data transfer. Whenever data is sent over a network, shared on a USB drive, or read from a hard drive, it moves as a stream of 1s and 0s, or bits. The rate at which those bits move is measured in megabits per second (Mbps).
- A megabyte measures how much digital information is stored in a file or on a device. So, an image, video, or audio file's size will be measured in megabytes (MB). For example, a software program on your laptop could take up 500 MB of space.
While the two units measure different things, there may be instances where you’d want to convert between the two measurements, which we’ll get into next.
How Many MB Is 1Mbps?
One Mbps (megabit per second) is equal to ⅛ MBps (megabyte per second), or 0.125 MBps.
To convert bits to bytes, simply divide the number of bits by 8, since every byte contains 8 bits. The same rule applies to the "per second" rates: Mbps ÷ 8 = MBps.
Why does this conversion matter? One example is when you want to understand storage capacity, or how long it takes to transfer files from a storage device. If an SSD notes its capacity in MBs, but the network speeds are measured in Mbps, converting to MBps can be a more consistent measurement.
Other instances where you may want to convert between MB and Mbps include:
- Understanding the maximum amount of data that can be transferred over a network in order to optimize network performance or network bandwidth
- Download times, by understanding how long it will take per file
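The conversions above are simple enough to sketch in a few lines of code. The following is a minimal Python sketch (the function names are illustrative, not from any standard library); the 500 MB file size and 100 Mbps link speed reuse figures that appear elsewhere in this article.

```python
def mbps_to_mbytes(mbps: float) -> float:
    """Convert a transfer rate in megabits/s to megabytes/s (8 bits per byte)."""
    return mbps / 8

def download_seconds(file_mb: float, link_mbps: float) -> float:
    """Estimate how long a file of file_mb megabytes takes over a link_mbps link."""
    return file_mb / mbps_to_mbytes(link_mbps)

# A 500 MB file over a 100 Mbps link: 100 Mbps = 12.5 MBps, so 40 seconds
print(mbps_to_mbytes(100))         # 12.5
print(download_seconds(500, 100))  # 40.0
```

Real downloads will take somewhat longer than this idealized figure because of protocol overhead, a point the sections below return to.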
Megabits vs. Gigabits
The difference between gigabits and megabits is scale. A megabit is 1 million bits, while a gigabit is 1 billion bits, or 1,000 megabits.
What Do Megabits and Gigabits Measure?
A gigabit per second (Gbps) is a common unit for measuring high-speed internet and network connections, while a gigabyte (GB) is a common unit for measuring storage capacity. As with megabits and megabytes, one gigabyte equals eight gigabits.
Examples of Gbps Connections and Uses
A Gbps connection is 1,000 times faster than a Mbps connection and is a typical standard for connection speeds in data centers. Because modern data centers transfer massive amounts of data per second—between applications, servers, and the cloud—speed is key.
Other connections or systems that typically need to transfer billions of bits per second can include:
- High-performance computing (HPC) grids. HPC grids carry out massive calculations and deal with large amounts of data processing, so they typically require Gbps data speeds. Without high-speed interconnectivity, teams working to mine insights or collaborate can lose months of productivity.
- Streaming video and gaming platforms. User experience suffers when platforms struggle with slow speeds that can lead to lagging and buffering for millions of users accessing them at the same time.
- Artificial intelligence (AI) applications and AI-powered image generation. Modern-day AI copilots need fast access to rapidly evolving learning models and massive data sets.
Data Storage vs. Data Transfer
The two interlinked concepts most important to this comparison of megabits vs. megabytes are data storage and data transfer. Put simply, data storage is how data is saved and secured, while data transfer is how those files and digital information move between storage and other destinations. Storage is of little value without transfer; together, both are critical aspects of good data management and crucial to getting maximum value from your data.
Data storage is measured in gigabytes, but today more so in terabytes (1 trillion bytes), and petabytes (1 quadrillion bytes). That will only continue to grow as data volumes grow worldwide. Understanding the building blocks (bytes) can help you understand the scale—onward to exabytes (1 quintillion bytes), zettabytes (1 sextillion bytes), and beyond.
Binary vs. Decimal in Storage
Why Your “1TB” SSD Shows 931GB
- Manufacturers use decimal units: 1 TB = 1,000,000,000,000 bytes
- Operating systems often use binary units: 1 TiB = 1,099,511,627,776 bytes, frequently still labeled "TB"

Divide the manufacturer's 1,000,000,000,000 bytes by 1,073,741,824 (one binary gigabyte, or GiB) and you get the 931 "GB" your OS reports.
Storage Capacity Calculations
- 1 TiB = 1,024 GiB = 1,099,511,627,776 bytes
- 1 TB = 1,000 GB = 1,000,000,000,000 bytes
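The "missing" capacity falls straight out of the arithmetic. A minimal sketch (the constant names are my own, not a standard API):

```python
# Decimal (vendor) vs. binary (OS) interpretation of the same capacity
TB  = 1000 ** 4   # 1,000,000,000,000 bytes: how the drive is labeled
TIB = 1024 ** 4   # 1,099,511,627,776 bytes: one binary terabyte (TiB)
GIB = 1024 ** 3   # 1,073,741,824 bytes: one binary gigabyte (GiB)

# A "1 TB" drive expressed in the binary units most operating systems use:
print(f"{TB / GIB:.0f} GiB")  # 931 GiB
```

No bytes are lost; the two numbers simply use different units for the same physical capacity.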
Conversion Cheat Sheet
When evaluating storage performance, it’s important to understand how to convert between bits and bytes. Many vendors advertise network speeds in bits per second (e.g., Gbps), but storage throughput is typically measured in bytes per second (e.g., GB/s).
Here’s a quick cheat sheet that helps bridge that gap:
- 1 Mbps = 0.125 MBps
- 1 Gbps = 125 MBps (theoretical)
- 100 Gbps = 12.5 GB/s theoretical; ~10 GB/s practical with overhead
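The cheat sheet generalizes to one formula: divide the line rate by 8, then optionally scale by a protocol-efficiency factor. A minimal Python sketch, where the ~80% efficiency figure is an assumption consistent with the overhead range cited later in this article:

```python
def line_rate_to_gbytes(gbps: float, efficiency: float = 1.0) -> float:
    """Convert a network line rate in Gbps to GB/s, optionally scaled
    by a protocol-efficiency factor (1.0 = theoretical maximum)."""
    return gbps / 8 * efficiency

print(line_rate_to_gbytes(100))       # 12.5  -> theoretical GB/s
print(line_rate_to_gbytes(100, 0.8))  # 10.0  -> with ~20% overhead (assumed)
```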
NVMe Throughput and Storage Bandwidth Calculator
Different storage protocols introduce varying levels of overhead that affect real-world throughput. This table compares the theoretical and practical bandwidth of common protocols like NVMe/TCP, iSCSI, Fibre Channel, and NVMe-oF over RDMA. Whether you’re sizing infrastructure or diagnosing performance bottlenecks, understanding these nuances can help you make better decisions about bandwidth, latency, and overall system efficiency.
| Protocol | Theoretical Speed | Practical Throughput | Overhead Factors |
|---|---|---|---|
| NVMe/TCP 100G | 12.5 GB/s | ~10 GB/s | TCP/IP headers, CPU processing |
| iSCSI 25G | 3.125 GB/s | ~2.8 GB/s | iSCSI + TCP/IP overhead |
| Fibre Channel 32G | 4 GB/s | ~3.6 GB/s | FC frame overhead |
| NVMe-oF/RDMA 100G | 12.5 GB/s | ~11.5 GB/s | Minimal protocol overhead |
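One useful way to read the table is as an efficiency percentage per protocol (practical ÷ theoretical). A small sketch using the table's own figures:

```python
# (theoretical GB/s, practical GB/s) pairs taken from the table above
protocols = {
    "NVMe/TCP 100G":     (12.5,  10.0),
    "iSCSI 25G":         (3.125, 2.8),
    "Fibre Channel 32G": (4.0,   3.6),
    "NVMe-oF/RDMA 100G": (12.5,  11.5),
}

for name, (theoretical, practical) in protocols.items():
    print(f"{name}: {practical / theoretical:.0%} efficient")
# NVMe/TCP lands around 80%, RDMA around 92%
```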
Storage Performance Beyond Bandwidth
Raw bandwidth numbers don’t tell the whole story. In real-world environments, performance is shaped by the nature of the workload—whether it’s random reads, sequential writes, or a mix of both. Understanding how IOPS, latency, and queue depth interact is key to optimizing performance. Here’s a quick breakdown of the trade-offs between IOPS and bandwidth, plus quick math examples to help storage administrators estimate performance under varying conditions.
IOPS vs. Bandwidth Trade-offs
- Random 4K Reads: IOPS matter more
- Sequential Files: Bandwidth matters
- Mixed Workloads: Latency and queue depth become critical
Real-World Storage Math
- 100,000 IOPS × 4KB = 400 MB/s
- 1M IOPS × 4KB = 4 GB/s
- At queue depth 1, raising per-I/O latency from 1 ms to 2 ms halves IOPS (1,000 → 500)
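These relationships reduce to two formulas: bandwidth is IOPS times block size, and at queue depth 1 (one outstanding I/O at a time) IOPS is the reciprocal of latency. A minimal sketch, using decimal units (1 KB = 1,000 bytes) to match the figures above:

```python
def iops_to_mbs(iops: float, block_kb: float) -> float:
    """Bandwidth in MB/s implied by an IOPS rate at a given block size."""
    return iops * block_kb / 1000

def qd1_iops(latency_ms: float) -> float:
    """IOPS achievable at queue depth 1, where each I/O must
    complete before the next is issued."""
    return 1000 / latency_ms

print(iops_to_mbs(100_000, 4))    # 400.0 MB/s
print(iops_to_mbs(1_000_000, 4))  # 4000.0 MB/s, i.e. 4 GB/s
# At QD1, doubling latency from 1 ms to 2 ms halves IOPS:
print(qd1_iops(1.0), qd1_iops(2.0))  # 1000.0 500.0
```

At higher queue depths the latency/IOPS relationship is more forgiving, since multiple I/Os overlap in flight.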
Storage Technology Updates
The storage landscape in the modern era looks very different from just a few years ago. NVMe/TCP and NVMe-oF have matured into mainstream enterprise protocols, while data centers are rapidly adopting 400G and 800G Ethernet to keep pace with escalating performance demands. At the cutting edge, technologies like Storage Class Memory (SCM) and computational storage are gaining traction, driven by AI and ML workloads that increasingly require sustained throughput well beyond 100 GB/s.
Everpure Performance Examples
Everpure systems are engineered for high-performance workloads, from transactional databases to large-scale AI pipelines. The latest FlashArray and FlashBlade models deliver exceptional throughput and IOPS, with FlashBlade//S pushing up to 75 GB/s and 15 million IOPS. But peak performance isn’t just about raw numbers—it also depends on efficient network design. With 25G per controller as a baseline and 100G aggregation increasingly standard for AI/ML, protocols like NVMe/TCP are gaining favor for their performance and efficiency, including significantly lower CPU overhead compared to iSCSI.
Data Reduction Impact on Bandwidth
Effective data reduction technologies don’t just save capacity—they also reduce bandwidth demands. Everpure systems leverage always-on features like deduplication, compression (often averaging 2:1), and zero-byte elimination to minimize the amount of data that actually moves across the wire. This has a direct impact on performance planning: a workload that would require 10 GB/s of raw bandwidth might only need 3.33 GB/s after a 3:1 data reduction. Understanding this multiplier is key to accurately sizing infrastructure for both throughput and efficiency.
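The multiplier is simply raw bandwidth divided by the reduction ratio. A minimal sketch using the 3:1 example above (the function name is illustrative):

```python
def wire_bandwidth(raw_gbs: float, reduction_ratio: float) -> float:
    """Bandwidth actually needed on the wire after inline data
    reduction, e.g. reduction_ratio=3.0 for a 3:1 ratio."""
    return raw_gbs / reduction_ratio

# A 10 GB/s raw workload with 3:1 reduction:
print(round(wire_bandwidth(10, 3), 2))  # 3.33 GB/s
```

Note that achievable ratios vary widely by data type, so sizing should use measured ratios for the actual workload rather than a fleet-wide average.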
Common Storage Bandwidth Calculations
Certain enterprise scenarios can place massive, short-term demands on storage infrastructure. Events like VM boot storms, large-scale database backups, and AI/ML training jobs all require careful bandwidth planning to avoid performance bottlenecks.
Here are some common bandwidth calculations:
VM Boot Storm
- 1,000 VMs × 50 MB/s = 50 GB/s peak
- Mitigation: Staggered boot, SAN boot optimizations
Database Backup Window
- 10 TB ÷ 4 h = 2.5 TB/h, or ~0.7 GB/s sustained
- With protocol overhead, plan for ~7 Gbps, so a dedicated 10G link at minimum
AI/ML Workloads
- 100+ GB/s streaming
- Architecture: NVMe-oF + 100G network
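The boot storm and backup calculations above follow the same pattern: total data divided by time, with unit conversions. A minimal Python sketch (function names are illustrative, decimal units throughout):

```python
def boot_storm_gbs(vm_count: int, per_vm_mbs: float) -> float:
    """Peak bandwidth in GB/s when vm_count VMs each read
    per_vm_mbs MB/s simultaneously."""
    return vm_count * per_vm_mbs / 1000

def backup_gbs(total_tb: float, window_hours: float) -> float:
    """Sustained GB/s needed to move total_tb terabytes of data
    within a backup window of window_hours."""
    return total_tb * 1000 / (window_hours * 3600)

print(boot_storm_gbs(1000, 50))        # 50.0 GB/s peak
print(round(backup_gbs(10, 4), 2))     # 0.69 GB/s sustained for 10 TB in 4 h
```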
Troubleshooting Storage Bandwidth
When performance lags, bandwidth is often the first suspect—but pinpointing the root cause requires digging into potential chokepoints across the stack. Here’s where to look and what to check, with practical diagnostic commands for monitoring network usage, storage queues, and raw bandwidth.
Common Bottlenecks
- Single 10G link = 1.25 GB/s theoretical, ~1.2 GB/s practical
- iSCSI CPU bottlenecks
- Switch oversubscription (e.g., 48×25G to 6×100G = 2:1)
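Oversubscription is the ratio of aggregate downlink capacity to aggregate uplink capacity. A one-line sketch confirming the 2:1 figure above:

```python
def oversubscription(downlinks: int, down_gbps: float,
                     uplinks: int, up_gbps: float) -> float:
    """Ratio of total downlink to total uplink capacity on a switch."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# 48 x 25G server ports feeding 6 x 100G uplinks:
print(oversubscription(48, 25, 6, 100))  # 2.0, i.e. a 2:1 design
```

Ratios up to about 3:1 are common in general-purpose fabrics, but storage-heavy traffic often justifies non-blocking (1:1) designs.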
Diagnostic Commands
```shell
# Network utilization
sar -n DEV 1

# Storage queue monitoring
iostat -x 1

# Bandwidth testing
fio --name=bandwidth --rw=read --bs=1M --size=10G --direct=1
```
Conclusion
Understanding bits vs. bytes is fundamental for storage administrators designing high-performance infrastructures. Modern NVMe/TCP and NVMe-oF protocols demand careful bandwidth calculations to avoid bottlenecks.
Key takeaways for storage administrators:
- Protocol overhead reduces effective bandwidth by 10-20%
- Mixed workloads require both IOPS and bandwidth optimization
- Data reduction technologies can dramatically lower network requirements
- Binary vs. decimal conversions affect capacity planning
Everpure arrays optimize both bandwidth and IOPS through advanced NVMe architectures. FlashArray//X and FlashArray//C deliver consistent performance regardless of data reduction ratios, while FlashBlade scales bandwidth linearly with capacity.
The future belongs to storage-class memory and computational storage – technologies that will redefine the relationship between bandwidth, latency, and processing power. Everpure is pioneering these innovations with research into persistent memory integration and in-storage computing.
Ready to optimize your storage bandwidth? Contact our professional services for a comprehensive performance assessment and bandwidth optimization strategy.
FAQ
What is the difference between a megabit and a megabyte?
A megabit (Mb) is a unit of digital information typically used to measure data transfer rates, while a megabyte (MB) is used to measure file size or storage capacity. One megabyte equals eight megabits.
Why are internet speeds measured in Mbps instead of MBps?
Internet speeds are measured in megabits per second (Mbps) because bits are used to describe how quickly data moves across a network. Storage capacity and file sizes are measured in bytes.
How do you convert Mbps to MBps?
To convert megabits per second to megabytes per second, divide the number by eight. For example, 100 Mbps equals 12.5 MBps.
What does the lowercase “b” versus uppercase “B” mean?
The lowercase “b” stands for bits, and the uppercase “B” stands for bytes. This difference changes the value by a factor of eight, so it is important to pay attention to capitalization.
Why does understanding bits versus bytes matter?
Understanding the difference helps you interpret internet speeds correctly and compare them to download speeds or file sizes, which are measured in different units.
Is Mbps only used for internet speed?
Mbps is commonly used for any type of data transfer rate, including network throughput, streaming performance, and bandwidth measurements.
How do gigabits and gigabytes relate to megabits and megabytes?
Gigabits and gigabytes follow the same relationship as megabits and megabytes. One gigabyte equals eight gigabits, and the capitalization still determines whether you are measuring bits or bytes.
Optimize Your Network Speeds
Take the next step in understanding network performance by learning how MTU size affects real-world throughput and bandwidth.






