Seems to be very disappointing.
Another issue not yet mentioned is that testing occurs on single units. What we see in QST or in Sherwood's tests may not be typical across a population of units. Of course, that cuts both ways: some tested units may perform significantly better, or worse, than others. Before DDC, I would have expected this level of variation across units, but not with today's breed of SDR transceivers.
Frankly, it seems to be a marketing-versus-technology issue.
Icom's target customer base tends to be relatively unsophisticated from a technological viewpoint. That's why you saw such rave reviews of the IC-7300 when in fact it was clearly a substandard radio with many design defects. Compared to Icom's legacy radios it was a major step forward, and most of its target audience was unaware of the technological flaws.
The IC-7600 will be a slightly better radio than the 7300, but the obvious limitation of the 36 kHz DSP chain will still constrain its features. I fully expect to see rave reviews for the 7600 from the technically unsophisticated who form the Icom target market, while more sophisticated users of products such as the Flex or ANAN will turn their noses up at the 7600.
Here’s an article by Andrew Barron ZL3DW on SDR testing. Note his comments on the Sherwood tests and the usefulness and applicability of tests used for traditional radios to testing of SDRs:
And some comments (including on Flex 6000 series and problems in SDR testing) by Rob Sherwood posted earlier by John / N0SNX:
I have my long form reports which cover operational issues, including how the radio performed in a contest. But of course the table on my web site is just numbers. As I have said at several ham presentations, we have become obsessed with wanting or owning a 100 dB radio. Of course bigger numbers are generally desirable, at least up to a point. Back when we had 70 dB radios, which was virtually every up-conversion radio made, the difference between a 70 dB radio and an 85 dB radio was huge. Now the question is, once we have a whole slew of 85 dB or better radios (close-in dynamic range), what else do we look at? Hopefully all sorts of things: clean receive audio with low fatigue, clean transmitter IMD, good ergonomics, stable software/firmware, reliability, warranty service, etc.
Also, 85 dB is fine most of the time. The TS-990S tests out between 85 and 98 dB, depending on the band, when measured at 2 kHz. This is because it happens to be reciprocal mixing dynamic range (RMDR) limited at 2 kHz on all bands. At 5 or 10 kHz the phase noise is much less of an issue. If the RMDR is 85 dB and there is a really strong CW station 2 kHz away, the limit may actually be the key clicks of that very strong station 2 kHz above or below in frequency. On SSB the transmitted IMD is virtually always the limit when trying to copy an S3 signal with an S9+30 dB signal 3 kHz away.
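As a sanity check on those numbers, the gap between an S9+30 dB interferer and an S3 desired signal can be worked out from the conventional HF S-meter scale (S9 = -73 dBm, 6 dB per S-unit). This sketch is not from Rob's post; it just makes the arithmetic explicit:

```python
S9_DBM = -73.0        # conventional HF S-meter reference: S9 = -73 dBm
DB_PER_S_UNIT = 6.0   # 6 dB per S-unit by convention

def s_to_dbm(s_units, over_db=0.0):
    """Convert an S-meter reading (plus 'dB over S9') to dBm."""
    return S9_DBM + (s_units - 9) * DB_PER_S_UNIT + over_db

strong = s_to_dbm(9, over_db=30)   # S9+30 dB interferer -> -43 dBm
weak = s_to_dbm(3)                 # S3 desired signal   -> -109 dBm
print(f"Level difference: {strong - weak:.0f} dB")  # 66 dB
```

That 66 dB gap sits comfortably inside an 85 dB DR3 radio, which is the point being made: close-in dynamic range stops being the limiting factor well before the published numbers stop climbing.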
Q: I wouldn't expect your test results to change based on the other software enhancements or fixes that I've seen on the road map.
Reply: There actually have been some software issues that affected basic measurements.
Q: Even Adaptive Predistortion (when they eventually add that to the 6000) wouldn't affect the receiver numbers. Does that seem right or am I missing something?
Reply: Certainly predistortion has nothing to do with basic receiver measurements.
Q: Do you think the way you rank the receivers in your Receiver Test Data listing will change to accommodate the SDRs? There are enough differences that you could make a case for that.
Reply: The problem is: what is the dynamic range (DR3) of a direct sampling radio, both in the lab and on the air with real signals? If the 6000 series or the Apache ANAN series are tested in the lab, the DR3 value is very dependent on the test level. Unlike a legacy radio, which is super clean until it starts to overload, a direct sampling radio has low-level distortion present all the time. It may be odd order, or at times just some other spurious. Spurious-free dynamic range should look at any nearby spur, not just third order. I never published any data on the SDR-1000 since the general spurious was way above the third-order spurious.
Q: At any rate, I think many of us are anxious to see where the 6700 lands in your list. (Will it be #2, #5, etc.?)
Reply: That is the problem. If tested at lower levels, like we actually usually have to contend with on the air (S9 + 40 dB), the DR3 might be in the 80s. If tested at levels like S9 +60 dB or S9 + 70 dB, the DR3 may well be around 100 dB. As I said earlier, real QRM signals on the band provide incidental dither (a feature not in the 6000 series chip), and may well smear distortion products into broadband noise. How does one account for this in a table?
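The "incidental dither" effect Rob describes, where random signals smear deterministic quantizer distortion into broadband noise, can be illustrated with a toy one-step quantizer. This is only an illustrative sketch, not a model of the actual ADC in any of these radios:

```python
import random
import statistics

random.seed(42)

def quantize(x, step=1.0):
    """Round to the nearest quantizer step (an idealized ADC code)."""
    return step * round(x / step)

true_level = 0.3   # a signal smaller than one LSB (step = 1.0)
N = 20000

# Without dither: the quantizer output is always 0 -- the sub-LSB signal
# vanishes into a deterministic (spurious) error, not into noise.
undithered = statistics.mean(quantize(true_level) for _ in range(N))

# With dither: uniform noise of +/- 0.5 LSB decorrelates the quantization
# error, so the average of many samples converges toward the true level.
dithered = statistics.mean(
    quantize(true_level + random.uniform(-0.5, 0.5)) for _ in range(N)
)

print(f"undithered mean: {undithered:.3f}")  # 0.000 -- signal lost
print(f"dithered mean:   {dithered:.3f}")    # close to 0.300
```

The same mechanism is why strong real-world QRM can make a direct sampling receiver's measured spurious behavior look different on the air than it does with clean two-tone lab signals, which is exactly the table problem Rob raises.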
Here is my feeling on the subject, and this is a CW contest issue. Once the DR3 is 85 dB or better, we are going to be fine in a contest / DX pile-up MOST of the time. SSB contests / DX pile-ups are limited by the other guy’s transmitted IMD products, at least until we have a lot of class A rigs on the air, or a lot of rigs with really well implemented predistortion.
Let's take the TS-990S vs. the 6700. Both are in the 85 to 100 dB range, depending on how we measure the radio. Once a radio is overload-proof, "good enough" from a real-world performance standpoint, I am going to pick a radio to purchase based on all those other aspects that are very important to me. I don't happen to own a K3, yet 63% of the radios in the recent WRTC were K3s. Why was that? It works well, it is small and doesn't weigh much, and again, once the radio fulfilled the basic needs very well, it came down to operator skill as to who won. (They all had the same antenna.)
Q: If the ARRL decides to rate a direct sampling radio vs. band noise, I don’t see any way to directly compare it to my table or the 40 years of history of published data by the ARRL.
Reply: Here is an example of the problem of any table sorted by close-in (2 kHz) DR3. That isn’t the whole picture, and I have never said it was. Take the Hilberling at the top of my table. It has the highest 2-kHz DR3, and it has outstandingly low phase noise (RMDR). But it doesn’t have QSK and its selectivity (300 Hz @ -6 dB narrowest selectivity) isn’t adequate in a DX pile-up. The next one on my table is the KX3. It has really high DR3 and its RMDR is outstanding. However, as the foot note clarifies, its opposite sideband rejection is only 65 dB.
Some hams go nuts over one number, such as a K3 owner asking me if he should sell his K3 and buy a KX3. That is a case of not seeing the forest for the trees.
Flex spent a fortune on making the 6700/6500 have a very high RMDR value, likely higher than practically necessary. Any OEM has to look at the BOM (bill of materials) cost and decide where to allocate money among the radio's subsystems.
I really liked the 6700 in the CQWW 160 CW contest in January. I also liked the TS-990S in CQWW SSB in October of 2013. Both radios are very different and have their own quirks. QSK was broken with FW 1.1 in January. The preamp gain of the Kenwood was way too high on 10 meters back in October, but has since then been improved.
Today if I am looking at a purchase, there are at least 10 radios that should be in my consideration list. The 6000 series would certainly be one of them. I once bought a $10,000 radio 10 years ago, and it went away after 5 months. It didn’t do enough better to warrant my investment in the radio, so I sold it and put up two more towers and yagis!
Final comment: Some of the numbers the League publishes I think are meaningless. What does a DR3 or blocking dynamic range mean if it is measured with a 1-Hz filter? Not much as far as I am concerned. Now we are going to have to come up with a meaningful way to measure direct sampling radios. Hopefully whatever the ARRL chooses has more relevance than the numbers one can get in the lab with a 1 Hz filter, which bears no resemblance to how we use a radio on the air.
73, Rob, NC0B
The bottom line is that I believe that the used FLEX-6300 Rob tested is most likely defective.
I have never seen a newly manufactured radio of any model in our 14-year history that measured in this IMD range. Below are the ARRL Lab measurements published in the April 2015 issue of QST for the FLEX-6300. The ARRL Lab blind-purchased a 6700 and a 6300 for their review. You can see that the ARRL's numbers for the 6300 are 10 dB higher than Rob's numbers, even on 6m. Our typical lab measurements agree with the ARRL numbers for the FLEX-6300.
Out of courtesy, the ARRL gives every manufacturer the opportunity to review test data and comment if there are material discrepancies between the lab results and expected performance. This has been my experience with Rob in the past as well.
On March 6th, Rob sent me an email saying that he was seeing some "strange non-monotonic IMD data" in his measurements on the used FLEX-6300 he was testing. Rob provided no specific measurement data. He asked if I would like to have the unit sent back for evaluation. I said, yes.
On March 7th we provided a return label so that we could take a look at the problem. Rob did not ship the unit until March 14 and it just arrived this afternoon. I have not had a chance to look at the unit yet. Rob published the data without any opportunity to review the data or to test the radio to see if there is a component failure. There are a number of components in the signal chain that can be degraded in performance due to partial ESD damage for example. It would have been helpful to have the actual data to review and then to be able to verify that hardware is in proper working order since the radio was second hand.
I have not read this entire thread, so there may be other comments or questions that I have not had time to address. I wanted to put the facts out in public on this issue first.
I think it would be extremely useful to the amateur radio community if someone created a "Sherwood Simulator", a program that would allow the user to easily simulate different performance characteristics of the signal chain in various architectures and under various signal environments.
For example, it would be nice to be able to hear band noise and signals at an idyllic rural location with a perfect receiver, and then switch between simulations of *actual* receivers for the same signal environment. Maybe a station 400 Hz away with a 500 μV signal, and another 1 kHz away with a 50 mV signal, while trying to copy a 3.2 μV signal.
It would be very informative to be able to tinker with AGC parameters, receiver architectures, and signal environments to allow hams to easily determine which architectures and performance requirements matter most to their needs, and also to help dispel myths about one architecture vs another.
I'm not sure how accurate such a simulation would end up being, but I think it might help provide the average ham with a more intuitive foundation for RX performance metrics.
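A "Sherwood Simulator" front end might start with nothing more than a signal-environment generator. This sketch builds the exact environment from the post above (a 3.2 μV desired signal, a 500 μV station 400 Hz away, and a 50 mV station 1 kHz away); the sample rate, tone frequencies, and function names are all illustrative assumptions:

```python
import math

SAMPLE_RATE = 48_000  # Hz; an assumed audio-domain simulation rate

def tone(freq_hz, level_uv, n_samples, rate=SAMPLE_RATE):
    """Generate one CW carrier at a given frequency and microvolt amplitude."""
    return [level_uv * math.sin(2 * math.pi * freq_hz * i / rate)
            for i in range(n_samples)]

def mix(*signals):
    """Sum several signals sample by sample (a linear 'antenna' input)."""
    return [sum(s) for s in zip(*signals)]

def db_above(level_uv, ref_uv):
    """Level of one signal relative to another, in dB."""
    return 20 * math.log10(level_uv / ref_uv)

n = 4800  # 100 ms of samples
desired = tone(700, 3.2, n)          # desired CW note at 700 Hz
neighbor = tone(1100, 500.0, n)      # 500 uV station 400 Hz away
big_signal = tone(1700, 50_000.0, n) # 50 mV station 1 kHz away
antenna = mix(desired, neighbor, big_signal)

print(f"big signal is {db_above(50_000, 3.2):.1f} dB above the desired one")
```

Feeding `antenna` through interchangeable software models of different receiver chains (quantizer, filtering, AGC) would then let listeners A/B architectures the way the post describes, though modeling each receiver faithfully is the hard part.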
I disagree. It's Sherwood's prerogative to test any way he wants. He needs no permission from any of us to release his findings.
Like Sherwood, we can test and publish just as he does. However, few folks have the test equipment and wherewithal to perform these complex performance tests. He's testing based on what he deems important. We can accept his result - or reject it.
Sherwood clearly discloses the fact that the recently tested units were a second sample. I fail to see why the performance data should change whether the tested unit is new or used. Me, I would rather see a sample of used units after they have had an opportunity to age. I am less concerned about the performance of one new unit than a string of used units that have been in the field for a long time. I am also more concerned about variability of performance across multiple units whether or not testing occurs across new or used units. We should all be asking why there's a 12 dB close-in DR variance in Sherwood's recently tested 6700 on 10m. It's a question, not an attack.
I would like to see Sherwood and Flex come together with a unified answer. Was a small set-up detail missed in testing? Were the units faulty from production? Did performance degrade over time? These are the important matters -- not Sherwood's reputation, not his decision to test used equipment, and not whether he needed to obtain anyone's permission to publish his data.
It would really be nice if there were a SIMPLE explanation of each column, using the column titles and not some other name that means the same thing: how the data in the column applies to everyday operation of a transceiver (practical application), whether a high number or a low number is better, and what range is totally acceptable (general use / contesting). Like dynamic range, where anything over 85 dB is fluff for the average operator. I never knew this and have learned a bit just reading the comments on this thread.
Here is a link to a great explanation of Reciprocal Mixing Dynamic Range (RMDR) by ARRL's Bob Allison. His explanation includes a real life example in a single sentence that puts it into context! Once you read this description, you can go back to the Sherwood's data and compare today's receivers with yesteryear's. It was a real eye opener for the old timers when I did a presentation for our local club. http://www.arrl.org/forum/topics/view/177
Sherwood only deals with radio specs. But what if he also discussed, for SDRs, things like the fit and finish of the radio? Is the software easy to learn and use? Is it well implemented, well written? Does it show the operator everything they need to know at any moment? What is it like to live with day to day? And what is the customer support like?
So, to me specs are interesting, but have little to do with my actual enjoyment.
Once I optimized the settings, I found that it improved the performance not only of the FLEX-6300 Rob tested but materially improved another randomly selected unit from a different production lot. After optimization, both performed at roughly the same level (IMD DR3 in the high 90s), which is materially better than even the ARRL Lab measurements. The hardware was not touched in any way.
When Rob returns from his vacation, I will send him the unmodified FLEX-6300 he was testing. We will create a new software test release that has the optimized settings. He can upgrade/downgrade the software to compare before and after, which will demonstrate that the fix is purely in software. While I have not had time to test this exhaustively on the 6500/6700, a quick check leads me to believe that the optimized settings will improve IMD performance of all FLEX-6000 Series radios.