Update 2011/01/31 – I’ve added new thoughts and comments on the subject here: http://www.unifiedcomputingblog.com/?p=234
Which is better? Which is faster?
I’ve been stuck on this one for a while. I’m traditionally a pure fibre channel kind of guy, so I’ve been pretty convinced that traditional FC was here to stay for a while, and that FCoE – as much as I believe in the technology – would probably be limited to the access and aggregation layers for the near term. That is, until it was pointed out to me the encoding mechanisms used by these two technologies and the effective data rates they allowed. I’m not sure why it never occurred to me before, but it hit me like the proverbial ton of bricks this week.
First off, a quick review of encoding mechanisms. Any time we transmit or store data, we encode it in some form or another. Generally, this includes some type of checksum to ensure that we can detect errors in reading or receiving the data. I remember the good old days when I discovered that RLL hard drive encoding was 50% more efficient than MFM encoding, and with just a new controller and a low-level format, my old 10MB drive (yes, that's ten whopping megabytes, kids. Ask your parents what a megabyte was.) suddenly became 15MB! Well, we're about to embark on a similar discovery.
1, 2, 4, and 8Gb Fibre Channel all use 8b/10b encoding, meaning 8 bits of data get encoded into 10 bits of transmitted information – the extra two bits are used for data integrity. So if the link is 8Gb, how much do we actually get to use for data, given that 2 out of every 10 bits aren't "user" data? FC link speeds are somewhat of an anomaly, in that they're actually faster than the stated link speed would suggest. Original 1Gb FC actually runs at 1.0625Gb/s, and each generation has kept this base rate and multiplied it. 8Gb FC is 8 × 1.0625, for an actual line rate of 8.5Gb/s. 8.5 × 0.80 = 6.8, so that's 6.8Gb of usable bandwidth on an 8Gb FC link.
10GE (and 10G FC, for that matter) uses 64b/66b encoding: for every 64 bits of data, only 2 bits are added for integrity checks. While this theoretically lowers the overall protection of the data and increases the amount of data discarded in case of a failure, the actual number of data units discarded due to failed serialization/deserialization is minuscule. For a 10Gb link using 64b/66b encoding, that leaves 96.96% of the bandwidth for user data, or about 9.7Gb/s.
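The arithmetic in the two paragraphs above can be sketched in a few lines of Python, using the figures exactly as given in the post (the function name is just for illustration):

```python
# Usable bandwidth after line-encoding overhead, using the post's figures.
def usable_gbps(line_rate_gbps, data_bits, coded_bits):
    # Only data_bits out of every coded_bits on the wire carry user data.
    return line_rate_gbps * data_bits / coded_bits

print(round(usable_gbps(8 * 1.0625, 8, 10), 2))  # 8Gb FC: 8.5Gb/s line rate, 8b/10b -> 6.8
print(round(usable_gbps(10, 64, 66), 2))         # 10GE as stated in the post, 64b/66b -> 9.7
```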
So 8Gb FC = 6.8Gb usable, while 10Gb Ethernet = 9.7Gb usable. Even if I was able to use all of the bandwidth available on an 8Gb FC port (which is very unlikely at the server access layer), with 10GE running FCoE, I’d still have room for 3 gigabit Ethernet-class “lanes”. How’s that for consolidation?
10Gb FC has the same usable bandwidth, and without the overhead (albeit a small 2% or so) of FCoE, but you don’t get the consolidation benefits of using the same physical link for your storage and traditional Ethernet traffic.
52 thoughts on “8Gb Fibre Channel or 10Gb Ethernet w/ FCoE?”
Hi, great info. Simple and clear. Thanks!
However, could you explain a little bit more about the formulas you used to calculate the bandwidth?
Hi Marek –
Specifically which parts are you curious about?
I'm curious about the FC calculation (8.5 × 0.80 = 6.8). Where did the .80 value come from? And how about the 96.96% of the bandwidth using the 64b/66b encoding? Could you show the exact calculation?
8b/10b encoding uses 10 bits to encode 8 bits of information. So for every 8 bits of "real" data, there are 10 bits of data sent across the wire (8/10 = .80), or a 20% encoding loss. In 64b/66b encoding, for every 64 bits of "real" data, there are 66 bits of data sent across the wire (64/66 ≈ .9696), or roughly 3% encoding loss.
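In code form, the two figures above are just the ratios of data bits to transmitted bits:

```python
print(8 / 10)             # 0.8    -> 20% encoding loss for 8b/10b
print(round(64 / 66, 4))  # 0.9697 -> roughly 3% encoding loss for 64b/66b
```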
8Gbit is already inclusive of the encoding, so your bandwidth is 8Gbit usable (and it's a 10Gbit spectrum, really).
10Gbit Ethernet is 99.5% available.
The BER is 10^-18 for FC and 10^-15 for Ethernet – so it means on FC you get one error every month, where on Ethernet you get one every day.
That's the difference.
Hey… that's wrong. FC-100, the master constant for FC tech, is 100MByte/sec, translating into 1.0625Gbps including 8b/10b encoding. The standard defined exact multiplications of that rate, so Marek is right: 1.0625 × 8 = 8.5Gbps for 800MByte/sec.
When speaking about "FC-10" this is different, as they tried to match it to 10GigLAN (which is 10.313Gbps, with 64b/66b encoding). Here we have, for FC-10, 1200MByte/sec at a 10.520Gbps phy.
I hope this helps. Indeed it's a confusion, since they broke the norm with 10GFC. Now 16GFC is coming.
Regarding the usage of 8GFC plus 3 GE: that's very relative to the implementation. You see, the 8b/10b gearbox inside an SFP can be usable for both, but from the PHY up to the upper L2 levels (FC-4 for FC and LLC for Ethernet) they are fully different stuff; even more, you'd need some way to disaggregate the 3 GE into 1GE ports, which sounds weird (here I really don't know what they do).
Okay, I get it 🙂 Thanks for the explanation.
Exactly what I was looking for. The mathematical difference between 8Gb FC & 10Gb FCoE
Greg – glad it was useful. Thanks for reading and commenting!
Aaah, the things one finds on the Internet with Google – thanks for a useful snippet of information to add to the “no I don’t want a bl*dy FC SAN in our new DC” project ;o)
Cheers – M.
Totally unsure about that. How likely is a bit fault? The more likely a bit fault is, the better 8b/10b encoding looks versus 64b/66b encoding. Plus, "Fibre Channel over Ethernet" sounds like the protocol overheads would add up. That's not clear to me from your article.
Bit faults are very unlikely – hence the willingness to move to 64/66b encoding in both 10G Ethernet and 10G FC. Remember that the 8b/10b encoding used in 1GE is a very old method – and comes from a period in time when bit faults were more likely (especially over copper), and data rates were much lower. At higher data rates (like 10G), the encoding mechanism becomes much more important.
With FCoE, you’re adding about 4.3% overhead over a typical FC frame. A full FC frame is 2148 bytes, a maximum sized FCoE frame is 2240 bytes. So in other words, the overhead hit you take for FCoE is significantly less than the encoding mechanism used in 1/2/4/8G FC.
Could you point me to the sources of this information? I mean the one about the FC and FCoE encoding.
Do a quick Google search for “fibre channel encoding” and “10g Ethernet encoding”. As I recall, Wikipedia has some good summaries.
Thanks for the maths in the comparison. I too am (or was) more for real FC (guess it has to do with the old SCSI affinity 😉 ).
But something struck me reading your comment #10.
I may be mistaken, but isn't the max frame size in Ethernet restricted to 15XX bytes?
So, even if you encapsulate FC in Ethernet, you would need to send 2 frames in the worst case. Taking just this into account, and thereby figuring 2148 bytes against 1500, you'll end up with roughly the same speed, given the 8-bit encoding of FC and the 64-bit encoding of Ethernet (the encoding "overhead" is balanced out by the larger frame size – never mind other protocol effects like an increased probability of collisions if 2 frames need to be sent, or Ethernet's random wait on resend, which breaks down in speed the more heavily the line is used). 2148/1522 × 6.8 = roughly 10.
I must admit though, that I haven’t really looked into the depths of the FC mechanisms – but I recall that e.g. a 16Mb/s token ring being about as fast as a 100Mb/s ethernet, when the lines got plugged.
Am I totally off with the reasons I mentioned?
FCoE utilizes jumbo Ethernet frames in the neighborhood of 2300 bytes – this allows a full Fibre Channel frame to be delivered within a single Ethernet frame. Ethernet does not natively have any capacity to perform frame segmentation.
Also, in a switched Ethernet environment (we haven’t used shared media hubs/half duplex environments in a long time), there’s no longer a chance of collision, random waits on resend, etc. So the performance doesn’t suffer the way that “old” shared-media Ethernet behaves.
I think you have gotten things a bit confused. You said, "So 8Gb FC = 6.8Gb usable, while 10Gb Ethernet = 9.7Gb usable", which ignores the fact that the baud rate is specifically set to eliminate the impact of encoding on the "data rate". To be clear, 1 Gb Fibre Channel with 8b/10b encoding at a baud rate of 1.0625Gb/s can deliver exactly 1 Gbps of data flow. The ANSI T11 standard sets the baud rate with full appreciation of the encoding scheme used to ensure 1 Gbps of data flow for 1 Gbps Fibre Channel.
As the link rate increased to 2, 4 and 8 Gbps, the baud rate also increased proportionally so that you get exactly 2 Gbps, 4 Gbps and 8 Gbps of data flow at those link rates.
If your point was 8 Gbps FC can’t deliver 8 Gbps of data rate and 10 Gbps Ethernet supporting FCoE can deliver 10 Gbps of data rate, then you are incorrect.
If I’m incorrect, I’m absolutely open to being educated. What am I missing in the math? If a 1Gb FC link is actually 1.0625 Gb/s, I’m seeing a 6.25% allowance for encoding, while 8b/10b encoding incurs a 20% overhead. So 1.0625*.80 = .85 Gb/s. Since each FC generation increases proportionally, 8Gb FC would be .85 * 8 = 6.8 Gb/s. If there’s a flaw in my logic or my math, by all means help me understand it.
In fact, in doing some additional research to make sure I’m not totally smoking crack (it’s always a possibility) – I dug up this Wikipedia article listing bit rates for a massive number of interfaces : List of device bit rates. That page also shows the 6.8Gb/s figure for 8Gb FC, along with the other iterations of FC technology. I did find it a bit odd that the chart neglects to show 10Gb FC, but it uses the same encoding as 10Gb Ethernet, so the bit rate would be the same.
Really, the only downside to FCoE is the additional overhead of the FCoE/Ethernet headers, which takes a 2148 byte full FC frame and encapsulates it into a 2188 byte Ethernet frame – a little less than 2% overhead. So if we take 10Gb/s, knock down for the 64/66b encoding (10*.9696 = 9.69Gb/s), then take off another 2% for Ethernet headers (9.69 * .98), we’re at 9.49 Gb/s usable. Of course, all of this is predicated on full FC frames – if the majority of traffic is sub-maximum frame sizes, the overhead can climb. Your mileage and bandwidth may vary.
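Chaining the two overheads from this comment together looks like this in Python (note the exact result depends on where you round: keeping full precision gives ~9.52 Gb/s, while rounding the intermediate values first gives the 9.49 above):

```python
line_rate_gbps = 10.0
encoding = 64 / 66     # 64b/66b efficiency, ~96.96%
headers = 2148 / 2188  # FC frame bytes / FCoE frame bytes, a bit under 2% overhead
usable = line_rate_gbps * encoding * headers
print(round(usable, 2))  # 9.52
```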
I’m wondering if part of the confusion might come from the traditional storage mentality (myself included) of thinking about storage bandwidth in terms of megabytes per second (MB/s) as opposed to traditional Ethernet mentality which thinks about bandwidth in terms of megabits or gigabits (Mb/s or Gb/s).
Given that there’s 8 bits to the byte, a network (regardless of type) capable of delivering 0.8 Gb/s (like 1Gb FC) is equal to about 100 MB/s. (Don’t believe me? Use Google calculator – type “0.8 Gb/s in MB/s” into Google and see what result you get…)
So 8Gb FC, with its resulting 6.8 Gb/s of usable bandwidth, does in fact give you more than 800 MB/s.
The real issue is when we start trying to recalculate these on the fly, doing the “100MB/s is 1000Mb/s is 1Gb/s” (which is wrong) and getting the associated conversion error. It’s really “100MB/s is 800Mb/s is 0.8Gb/s”. So there’s your 20% overhead right there.
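The bits-to-bytes conversion that trips people up, as a quick sketch (decimal units throughout, matching the Google calculator result mentioned above):

```python
def gbps_to_MBps(gbps):
    # 8 bits per byte; Gb/s and MB/s both use decimal (power-of-10) prefixes here.
    return gbps * 1000 / 8

print(gbps_to_MBps(0.8))  # 100.0 MB/s -- 1Gb FC's effective data rate
print(gbps_to_MBps(6.8))  # 850.0 MB/s -- 8Gb FC's usable bandwidth
```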
Just thinking out-loud here folks. 🙂
Okay, I found the correct math. It took a bit to rummage around in the “attic” I call my mind. Here is the reference, “Fibre Channel, Gigabit Communications and I/O for Computer Networks”, Alan F. Benner, McGraw Hill Series in Computer Communications, Copyright 1996., pg 24.
"… yielding an effective data transfer rate of:
1.0625 Gbps × (2048 [PL] / 2168 [PL+OH]) × (1 [byte] / 10 [encode bits]) = 100.369 MByte/sec"
PL – Payload
OH – Frame overhead
As this shows, encoding as well as frame overhead are fully accounted for when computing the effective payload transmission rate in MBytes/sec.
I hope this is informative.
As you point out correctly, both the Fibre Channel and Ethernet use the same encoding, 64/66, at 10 Gbps.
To the subtitle of your post, "Which is better, which is faster?", the short answer is "it depends" on a lot of factors, not just the encoding scheme. IMHO, it is very hard to argue on the technical merits against a protocol that has had 15 years of optimization for a specific, specialized task, as Fibre Channel has been optimized for storage. It's an entirely different thing to discuss "other factors" which are non-technical and how they may affect the customer's decision of what is "best".
Hmm … well, that attempt to type out the formula got mangled – it seems white space gets stripped in WordPress. Here it is in a shorter form:
1.0625 × (2048 [PL] / 2168 [PL+OH]) × (1 [byte] / 10 [encode bits]) = 100.369 MByte/sec
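The formula checks out; here it is as a quick Python sanity check:

```python
# Benner's effective data rate for 1G FC: payload fraction of the frame,
# then 1 byte of data per 10 encoded bits (8b/10b).
rate_bytes_per_sec = 1.0625e9 * (2048 / 2168) * (1 / 10)
print(round(rate_bytes_per_sec / 1e6, 3))  # 100.369 MByte/sec
```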
What is the payload if you're using .mxf files?
If the goal is 100MB/s, I would agree that it's fully accounted for. The difference is that the conversation isn't about 100MB/s, it's about effective data rate. A theoretical 1.0625 Gb/s link using 64/66b encoding would provide roughly 129MB/s. I agree that 8Gb FC provides 800MB/s (actually a bit more). But again, the encoding mechanism hinders the effective transfer rate.
That’s the beauty of FCoE – the protocol specific optimizations that come from the long-term development of Fibre Channel as a protocol are retained with FCoE. Only the “wire” is replaced, along with a new flow control mechanism that replaces buffer-to-buffer credits with something that approximates the same behavior. The protocol isn’t modified in any way.
I don’t disagree that Fibre Channel as a protocol is optimized for storage traffic – which is why I prefer it over other storage approaches (iSCSI, NFS, etc) – regardless of the wire it’s running over (FC or Ethernet).
Isn’t Brocade investing a lot of money in FCoE themselves?
I'm now thoroughly confused by your comments. Sorry. I guess it's the advanced years and grey hair 🙂
So what do you mean by “effective data rate”? I know what I mean by it. Please see the result in the formula I provided which computes “effective data rate”, for 1 Gig Fibre Channel which is the maximum data rate you can flow on the wire. That’s what I meant.
You must appreciate that encoding schemes are critical choices at the physical layer of a standard. They in fact do not “… hinder the effective transfer rate.”, but are explicitly designed to achieve that rate and to do so cost-effectively. How?
An encoding scheme requires hardware to support it, and it has to be cost-effective. Please note the date of my reference, 1996, which is at the beginning of the commercial application of Fibre Channel. At that time (15 years ago) 1 Gbps was FAST. The 8b/10b encoding method developed by IBM was efficient. In fact, it was so efficient that it continued to work at 8x the bandwidth and over 15 years. That's outstanding.
At 10 Gbps, the Ethernet and Fibre Channel standards groups jointly selected 64b/66b encoding since 8b/10b could no longer meet the bit error rate required at that speed. Note, BOTH approved it. Why? Well, to reduce cost at the physical layer.
If you dig deeper, you will note that the Ethernet and Fibre Channel baud rates are not the same at 10 Gbps. Why? It has to do with the frame format.
Now, if you want to debate "efficiency" of Ethernet frames vs Fibre Channel frames for transporting storage data, and support the contention that Ethernet is more efficient, good luck. You will first have to explain how a protocol (Ethernet) requiring an 8-byte header (preamble + SOF header) is more efficient than one that requires only 4 bytes (the Fibre Channel SOF primitive).
Next, you will have to show why adding the FCoE shim layer to the Fibre Channel payload is more efficient than not adding it, which is the case with Fibre Channel. You are incorrect when you state "That's the beauty of FCoE … Only the 'wire' is replaced …" There is an added FCoE shim in the frame, and that's not as efficient, is it?
Now, to your final question, yes Brocade is investing in FCoE. Why? Because there are use cases and market segments that make economic sense for that investment. But not all use cases and market segments make sense for FCoE, which is why we still invest in pure Fibre Channel including our goal to deliver the first 16 Gbps Fibre Channel switch in the industry.
And we continue to invest heavily in Ethernet technology, delivering the industry's first Ethernet fabric with our new Brocade VDX 6720 family of data center switches, which won the most important enterprise IT product of the year award from CIOEdge.
I happened upon this Cisco technical brief, published "back in the day" when Ethernet at 1 Gbps was "new".
Note the leverage of the ANSI T11 work on 1 Gbps speeds and 8b/10b encoding for Fibre Channel at the physical layer of the standard. Note also the different baud rates for Ethernet vs. Fibre Channel, due to frame differences, to achieve the desired goal for "data rate".
Hope you find that reference of some interest.
Not really sure how much more plain I can make it. You seem intent on focusing on “desired data rate” as opposed to actual performance. I’m not questioning your figures – you seem to be ignoring mine. Yes, 1.0625 Gb FC gives 100MB/s. Yes, 8.5Gb (1.0625 * 8) FC gives 800MB/s (or better). No one is disputing that. It doesn’t matter if we compare in MB/s or Gb/s, 8b/10b encoding will always be less efficient than 64b/66b encoding.
Since you're so focused on MB/s, let's do the numbers that way. 1.0625 Gb FC = 100MB/s. So that's our baseline. 8.5 Gb FC = 870MB/s (by pure math). 10 Gb FCoE = 1214MB/s (by pure math). Now, let's normalize for link speed… 870/8 = 108 MB/s for each Gb/s of link speed – pretty close to what we started with. 1214/10 = 121 MB/s for each Gb/s of link speed. The disparity gets even larger if you use the "800 MB/s" value for 8Gb FC.
These numbers take into account the various protocol overheads, etc. I don’t really mind throwing a few extra bytes around in overhead and encapsulation when I’m still getting 13 MB/s more throughput for every 1 Gb of link speed – and bandwidth that’s flexible between protocols. That’s a full extra Gigabit Ethernet link there for “free”, and that’s even assuming that you’re maxing out the 8Gb FC bandwidth – a HUGE assumption at the access layer, which is where FCoE has the most dramatic and immediate payback. Instead of cabling by protocol, why on earth wouldn’t you cable for bandwidth with efficient encoding?
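The per-gigabit normalization above, sketched out (the comment rounds these down to 108 and 121):

```python
print(870 / 8)    # 108.75 MB/s per Gb of link speed for 8Gb FC
print(1214 / 10)  # 121.4  MB/s per Gb of link speed for 10Gb FCoE
```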
Your tone suggests that I’m anti-Fibre Channel, which couldn’t be further from the truth. I’ve been a long time supporter of FC and continue to be so. Obviously something had to give in encoding, which is why FC is finally adopting 64b/66b with 16Gb FC. The biggest reason NOT to change encoding is that it makes the ports incompatible with previous versions.
Once 16Gb hits the street, the FC argument changes quite a bit. Of course, with 40Gb Ethernet right around the corner, it doesn’t change for long.
Hey man, no offense nor disrespect was meant in my comments, nor was it my intent to accuse you of any particular bias vis a vis my “tone”. Folks who know me will tell you I’m blunt and decided tone deaf 🙂
Hope you have a Gr8 day my friend.
i was looking for this explanation.
simple and clear!
i have a question: what about iSCSI on 10GbE?
I think you can help me. When using the command 'portperfshow' on a Brocade FC switch, am I right in assuming, from reading your blog, that on a 4Gb/s HBA port, if I see 425 MB/s, then I am saturating that port?
When using Web Tools on the Brocade switch, it never displays more than 100MB/s of throughput, so my assumption is there's a bottleneck somewhere. Since I'm a novice at this, I must ask. By the way, the switch is not displaying any errors.
Any help would be appreciated.
@Max – in terms of this discussion, iSCSI over 10GE is just normal TCP traffic like any other.
@Robert – you’ve definitely got a bottleneck. 100MB/s should be a big clue, since that’s right at 1Gb/s speed. Sounds like something in the path, or one of the end devices, is operating at 1Gb.
Third I/O has published “An Analysis of 8 Gigabit Fibre Channel & 10 Gigabit iSCSI”; their analysis says 8Gb Fibre is still superior to iSCSI.
I looked at that Third I/O pdf, and I have to say it’s amazing what results one can cook up when in control of the ingredients.
Given the benefit of the doubt that the results are actually what was posted, the major flaw, and the question I have to ask, is why in the world did they use Windows 2003 Server in the mix at all? It's a 9-year-old OS with a single-threaded TCP/IP stack that was never designed to scale, and yes, while patches and improvements have been made to that stack, it's inherently flawed, with a design focused on providing network services for slower and congested networks like the Internet.
I would buy what they're selling if they posted some Linux or native Windows 2008-to-Windows 2008 numbers on there.
Also, as an addendum to my last post, why did they use AMD CPUs when their per-core performance is poor compared to Intel's Westmere line?
Couple an AMD CPU with Windows 2003's single-threaded TCP/IP stack and you will get inferior performance against an FC adapter every time. It's the perfect recipe for building an inferior-performing system.
Lastly, this conversation will be moot real soon. Based on road-map conversations I've had with OEMs, all the major players are moving to a converged adapter that will do both FC and Ethernet on the same card – all you have to do is enable the feature in software and carve out the bandwidth. All these purpose-built cards and switches will be phased out in the next 5 years.
Dave is right about the overhead for FC, but some additional information is necessary. FC focuses on MB/s of customer data. We designed FC to carry, at 2k payloads, 100MB/s of customer data simultaneously in both directions, even when using one ACK per frame (the additional data rate is used to carry the overhead of 8b/10b and the frame overhead). When we started FC in 1988, 64b/66b was expensive to process vs the 8b/10b. The bit overhead is primarily for clock synchronization and secondarily for error detection. FC was also designed to treat everything as a 40-bit (in the 10b code) "Word" to facilitate hardware synchronization, detection, etc. A number of 40-bit codes are used for flow control and other purposes (these are typically done with packets in Ethernet). So, there is some additional overhead. As an example, to make 10GE useful for FC, we required reliability similar to that of FC. One of these requirements is "no dropped packets"; to do this, 10GE uses PAUSE flow control. So, one really needs to look beyond the bit rate to see what the actual performance is, as measured by the customer's data throughput.
Thanks for the additional insight, Horst!
Hi Dave- You have excellent knowledge of FC throughput, do you have any information on the more physical layer for fibre? Like what would be the luminescence tolerances for the differences between a 0 & 1 bit. And, how does the fibre channel keep timing synchronization between the sender and the receiver?
Hi Henry –
This is well beyond where my knowledge of FC stops. I’ll see if I can dig up some references for you.
IBM ran a test of 4Gb fiber channel vs 10Gb FCoE and they performed about the same. That means 8Gb fiber channel will perform about twice as fast as 10Gb FCoE:
Actually, what they proved was that for that particular workload and configuration, they got about the same performance. If you swapped the 4Gb FC out for 8Gb FC in that same test, I would expect it to perform about the same as the 4Gb FC did. If you look at the test, it shows that the transport really wasn't that important for that workload. For what they wanted to prove (which is: use whatever you're comfortable with), they were quite successful. The issue isn't which one is faster – it's which one gives me the most usability AND flexibility. I'm a long-time fan of traditional FC, but I've become a much bigger fan of flexibility.
This is an interesting topic of discussion. I know it’s old at this point but I’d like to know if you’ve returned to this topic now that 16Gbps FC is out and uses the 64/66 encoding?
“16Gbs” FC actually only uses a link speed of around 14Gbps. Their target is 1600MB/s, not 16Gbps. I find calling it 16Gb FC very misleading…
In response to Kevin's claim that a single 4Gb FC link is comparable to a single 10Gb FCoE link, he might want to read the whole article before coming to a conclusion, or misleading others.
The test is based on 8 × 4Gb FC = 32Gbps of bandwidth vs 3 × 10Gb FCoE = 30Gbps of bandwidth… and FCoE is still faster despite the lower maximum bandwidth.
I have to agree with Kent here. That's what I read also. In the real world I work in both of these environments, and when it comes to moving large volumes of data around, FCoE has won hands down. That includes IBM, Dell and HP; I've been on all of them. IBM, I must say, has always been the slowest no matter what you put on them, FC or FCoE. So I think the equation should also consider the source to be more accurate.
I’m sure I read somewhere that the TCP overhead meant that 10G iSCSI couldn’t match 8G fibre.
I’m not an expert and I’ve looked and can’t find the reference.
This fact did influence my decision to go with fibre when choosing a storage solution. (coupled with the lack of 10G infrastructure).
If my two choices were 8G FC or 10G iSCSI, I’d probably go with the 8G FC too – but that’s more of a protocol than a speed thing. Even with the overhead of TCP/IP, they’re probably similar in performance – at least good enough for most applications. I’d rather have the FC protocol to work with since I’m more comfortable with it from a troubleshooting and management perspective.
I have been reading this thread and have a dumb question. If the transfer rates of the disks in an array are limited to 3Gb/s once they go through a copper link (ref. EMC CLARiiON 480 documentation), doesn't that make the layout of the disk groups a big dependency? Could you please help me understand this? If there are 3 disks concatenated in a mirror, doesn't that mean that the max data transfer rate will be limited to 3Gb/s for a specific read/write operation to a server? Thanks for reading.
True, transport throughput is only one (and sometimes small) part of the equation. If you’re talking about a mirrored set, you’re right – you’d be limited on any one transaction to the speed of the disk – and that’s assuming that particular spindle isn’t serving any other requests at the same time.
Nice post, but I’d like to point out that the extra bits in 8b/10b and 64b/66b encoding are not there for data integrity. They are there for clock recovery and to balance the bitstream. They are used to keep the number of 0 and 1 bits approximately equal which is important for data transmission equipment, and to limit the number of successive 0 bits in a row so that clock recovery is easier in the hardware. These bits can be used to detect line errors, as certain code sequences are illegal, but they are not used in any type of error correction.