New Sherwood 6700 and 6300 tests

  • 2
  • Question
  • Updated 2 years ago
  • Answered
Has anyone seen the new Sherwood test results for the 6700 and 6300?
They seem very disappointing.
Photo of Gopro

Gopro

  • 30 Posts
  • 3 Reply Likes

Posted 2 years ago

  • 2
Photo of ka7gzr

ka7gzr

  • 218 Posts
  • 36 Reply Likes
I think there should be more than one specification attribute in these rankings, including transmitter performance. These selected performance figures should be developed by the industry along with the users. Each performance figure would have a "weighting" factor applied. There could be further category rankings by mode of operation, e.g. CW, SSB, rag-chewing, DX, etc.
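To make the weighting idea concrete, here is a minimal sketch in Python. The attribute names, normalization ranges, weights, and the "CW contesting" profile are hypothetical examples, not any agreed industry standard:

```python
# Minimal sketch of a weighted composite receiver/transmitter score.
# The attribute names, normalization ranges, and weights below are
# hypothetical examples, not an agreed industry standard.

def composite_score(specs, weights, ranges):
    """Normalize each spec to 0..1 against its (worst, best) range,
    then form a weighted average."""
    total = 0.0
    for name, weight in weights.items():
        worst, best = ranges[name]
        normalized = (specs[name] - worst) / (best - worst)
        normalized = min(max(normalized, 0.0), 1.0)  # clamp to 0..1
        total += weight * normalized
    return total / sum(weights.values())

# Example: a "CW contesting" weighting profile (made-up numbers).
weights = {"dr3_2khz_db": 0.5, "rmdr_2khz_db": 0.3, "tx_imd3_db": 0.2}
ranges  = {"dr3_2khz_db": (70, 110), "rmdr_2khz_db": (80, 130), "tx_imd3_db": (20, 45)}
rig     = {"dr3_2khz_db": 96, "rmdr_2khz_db": 110, "tx_imd3_db": 30}

print(f"Composite score: {composite_score(rig, weights, ranges):.2f}")
```

Different operating styles (DXing, rag-chewing, digital) would simply use different weight sets against the same measured data.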
Photo of Paul Christensen, W9AC

Paul Christensen, W9AC, Elmer

  • 323 Posts
  • 138 Reply Likes
96 dB of close-spaced DR is still excellent.  It would be helpful to have an explanation of the recently tested 6700's 12 dB of DR degradation on 10m.  If this is normal by design, then the League should start publishing DR performance by band of operation in its equipment reviews.

Another issue not yet mentioned is that testing occurs on single units.  What we see in QST or in Sherwood's tests may not be typical across a population of units.  Of course, that goes both ways: some tested units may perform significantly better - or worse - than others.  Before DDC, I would have expected this level of variation across units, but not with today's breed of SDR transceivers.

Paul, W9AC

    
Photo of KY6LA - Howard

KY6LA - Howard, Elmer

  • 3735 Posts
  • 1599 Reply Likes
The Icom IC-7300 and IC-7610 use the traditional 36 kHz DSP chain found in all their older radios. Basically it seems to be a cost issue: save money and reuse old designs rather than invest in new technology. Icom also saved money by using DVI rather than paying a license fee for the more modern HDMI.

Frankly it seems to be a marketing vs technology issue.

The Icom target audience tends to be relatively unsophisticated from a technological viewpoint. That's why you saw such rave reviews of the IC-7300 when in fact it clearly was a substandard radio with many design defects. Compared to Icom's legacy radios it was a major step forward, and most of its target audience was unaware of the technological flaws.

The IC-7610 will be a slightly better radio than the 7300, but the obvious limitation of the 36 kHz DSP chain will still limit its features. I fully expect to see rave reviews for the 7610 from the technically unsophisticated who form the Icom target market, while more sophisticated users of products such as Flex and ANAN will turn their noses up at it.
Photo of Varistor

Varistor

  • 334 Posts
  • 74 Reply Likes
What a bunch of crap! If you think that Flex users are more sophisticated than other hams, you are on something. Just take the time to read about the types of antennas being used here, the frequent complaining about RFI issues, numerous Windows issues, and OMG, the discovery of remote switches. Just because people sit in front of giant screens and click on their waterfalls doesn't make them smarter than the rest.

This elitist view is just rude.
Photo of John

John

  • 5 Posts
  • 0 Reply Likes
I like my Flex. Having said that, the Flex is lacking some in CW operation. In this mode the Icom radios are super. Perhaps Flex can work on this...
Photo of James Del Principe

James Del Principe

  • 304 Posts
  • 45 Reply Likes
I do like my Flex, but my ancient FT-2000 had some nice features that would be good to have. Working split was very easy: with the push of a single button I was on the 2nd VFO/RCVR. Press and hold, and it went 5 kHz up; another press and it jumped to 10 kHz. In SSB, if I hit the key I could send CW. There are two key jacks and each can be programmed individually. I had one set up as a straight key and the other as a 'bug'. Yep, just like a real Vibroplex. The negative was skirt selectivity... very broad.
Photo of KY6LA - Howard

KY6LA - Howard, Elmer

  • 3735 Posts
  • 1599 Reply Likes
@N2WQ

If you actually read what I wrote, I made no comment on the sophistication of individual users.

It is a fact of life that the TARGET MARKET for the Icom SDRs is the less technical appliance operator.
Photo of Duane, AC5AA

Duane, AC5AA

  • 447 Posts
  • 102 Reply Likes
John - examples of how ICOM beats the Flex on CW might be helpful.
Photo of John

John

  • 5 Posts
  • 0 Reply Likes
OK, Duane. Well, to start, I can key my 7600 in full QSK. There is no delay and a beautiful CW note in the CW monitor. No delay. Now Duane, I will confess I am a fairly new Flex user. If you know of a better way please let me know, as I am really trying to give the Flex a fair comparison. The filtering is equal, and the Flex may beat the Icom on the NB. Icom has an auto-tune for peaking CW; not really used, as it usually does not peak. I am learning. If you have suggestions to help the Flex do better on CW, please let me know... 73 john
Photo of Duane, AC5AA

Duane, AC5AA

  • 447 Posts
  • 102 Reply Likes
Hi John - up until the last two Beta releases, which I'm guessing is what you're running, the QSK CW was outstanding. Now there are a couple of issues, like an AGC(?) pop on keying and not-quite-full QSK, in addition to an intermittent hesitation if you use CWX, but I expect those will be fixed in the next standard release just because they were working fine prior. I know that's probably not a very satisfying answer.
(Edited)
Photo of John

John

  • 5 Posts
  • 0 Reply Likes
OK, Duane. I will back up a few releases and check that. Yes, I am running the latest. Are you keying direct from the radio, not a WinKeyer? 73 john
Photo of Duane, AC5AA

Duane, AC5AA

  • 447 Posts
  • 102 Reply Likes
Correct. I use either CWX for routine or (non-serious) contesting, or a paddle plugged into the front Key jack. Both ran fine until the two recent Betas (if I remember right - I may have skipped one of the later releases, so I might be off by one release). One very nice thing about CWX is that it tracks keyer speed with the plugged-in paddle - something that a number of rigs with internal keyers don't do.
Photo of John

John

  • 5 Posts
  • 0 Reply Likes
OK, I am using a Maestro. I don't have CWX on the Maestro. I sent you an e-mail... let's chat... 73 john
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2311 Posts
  • 949 Reply Likes
Icom users do tend to be more traditionalist. Fewer contesters are using Icom now, but that is mainly due to the popularity of the Elecraft K3(S). However, there are a good few power users such as K3LR and WB9Z/NV9L (who also have a Flex).
Photo of Duane, AC5AA

Duane, AC5AA

  • 447 Posts
  • 102 Reply Likes
Hi John - I am not using a Maestro so I probably won't be able to help you on this. I use direct connection of the 6500 to my network, and SSDR from my shack desktop. I have no idea what Maestro configuration options are available, or how it works in QSK. You might want to check in with Dudley at Flex.
Photo of Andrew O'Brien

Andrew O'Brien

  • 384 Posts
  • 44 Reply Likes
I'm wondering whether ANY of the transceivers available on the market make any real difference when it comes to "average" ham radio activity. While I understand the reliability of test/bench measurements, I'd like to see a test involving a human ear, 3000 miles between transmitting and receiving stations, and average antennas. Then step down the PEP of the transmitting station and compare transceivers, determining when the human ear involved can no longer hear the transmitting station. I'm going to guess that for SSB phone communications, the level at which the ear can no longer copy the transmitting station is going to be the same for 20-30 of the transceivers tested. For super-weak CW or deep-SNR digital modes it might make a bigger difference, but that difference would rarely involve the ability to work a station of "importance" (a needed DX entity). In fact, I'm willing to argue that it is only contesting stations that are impacted by any of this. Like the station begging for contacts with no takers yesterday who could not hear my repeated attempts to contact him.

The difference in receive performance between various radios DOES become important for SWL broadcast band DXing. Pulling in that rare 90m Indonesian station ID might just need that extra receive sensitivity. For average ham activity I doubt the difference between a Flex 6700 and a Kenwood TS-440 would be noticed by ears (eyes would notice a difference).

Andy K3UK
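Andy's proposed experiment is essentially a link-budget exercise, and it can be sketched in a few lines. The path loss, antenna gains, band-noise level, and "copy threshold" below are illustrative guesses, not measurements; real HF skywave loss varies wildly with band and conditions:

```python
# Rough illustration of the "step the power down" experiment as a link
# budget.  All numbers are illustrative assumptions, not measurements.
import math

def dbm(watts):
    return 10 * math.log10(watts * 1000)

PATH_LOSS_DB   = 130   # assumed skywave path loss over ~3000 miles
ANT_GAIN_DB    = 4     # assumed combined gain of both "average" antennas
NOISE_FLOOR    = -174 + 10 * math.log10(2400) + 15  # 2.4 kHz SSB, band noise ~15 dB over kTB
COPY_THRESHOLD = 3     # assumed minimum SNR (dB) for an ear to copy SSB

for p_watts in (100, 50, 25, 10, 5, 1):
    rx_dbm = dbm(p_watts) - PATH_LOSS_DB + ANT_GAIN_DB
    snr = rx_dbm - NOISE_FLOOR
    verdict = "copy" if snr >= COPY_THRESHOLD else "no copy"
    print(f"{p_watts:>4} W -> RX {rx_dbm:6.1f} dBm, SNR {snr:5.1f} dB: {verdict}")
```

With assumptions like these the band noise, not the receiver, sets the copy limit, which is the point Andy is making: most modern transceivers would drop out of copy at roughly the same transmit power.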
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2311 Posts
  • 949 Reply Likes
The Sherwood (SEI) receiver test table orders radios by narrow-spaced dynamic range. This is really only important in crowded band conditions. Anything above 80 dB is just fine for everyday use, and actually for most contesters.
Photo of KY6LA - Howard

KY6LA - Howard, Elmer

  • 3735 Posts
  • 1599 Reply Likes
@Andy

I have actually run many on-air comparative receiving tests with many different brands and models of radios for a contest station.

Sensitivity is not noticeable except where phase noise is a factor.

Dynamic range is really important for a contest station, but not so much for the average ham.

Phase noise is definitely noticeable by the average ham: the lower the phase noise, the more even a weak SSB or CW signal jumps out of the noise.
Photo of Michael Coslo

Michael Coslo

  • 947 Posts
  • 258 Reply Likes
Andrew - to answer your question, those tests' purpose is exactly what this entire thread is about - quibbling over trivialities.

Wanna know what I like the Flex for? Not some 1 dB difference over something, but the layout lets me scope out entire bands, and the interfacing to the different programs and other software that I use is seamless.

I wonder what the spec is for that?


Photo of Ken - NM9P

Ken - NM9P, Elmer

  • 4188 Posts
  • 1333 Reply Likes
Yes... phase noise on both receive AND transmit is very important these days.

I can tell the difference in phase noise between two local KW hams - each less than a mile away.  One uses the Kenwood TS-850SAT that I sold him after I got the 6500 and runs it into a TL-922 amp and a TH-11DX beam.  The other uses an Icom IC-735 into an AL-80 and dipoles/verticals.

The "footprint" left by the IC-735 is MUCH wider on my panadapter, and the trailing edges of its phase noise raise the noise floor over a much wider bandwidth.

I can operate much closer in frequency to the stronger 850 station than to the 735 station, because the phase noise transmitted by the 735 is much worse, even on CW or when he is tuning up.

(BTW, my friend with my old 850 is close enough, and has enough antenna gain, that he can hear me at S9 when I am transmitting on 10 meters with only the output of my transverter port, when our antennas are pointed correctly!)
Photo of HCampbell  WB4IVF

HCampbell WB4IVF

  • 275 Posts
  • 84 Reply Likes

Here’s an article by Andrew Barron ZL3DW on SDR testing.  Note his comments on the Sherwood tests and the usefulness and applicability of tests used for traditional radios to testing of SDRs:

 https://www.google.com/search?q=Performance+testing+of+Software+Defined+Radios+By+Andrew+Barron+ZL3DW&ie=utf-8&oe=utf-8

And some comments (including on Flex 6000 series and problems in SDR testing) by Rob Sherwood posted earlier by John / N0SNX:

https://community.flexradio.com/flexradio/topics/arrl_and_sherwood_testing?topic-reply-list%5Bsettin...

I have my long-form reports, which cover operational issues, including how the radio performed in a contest. But of course the table on my web site is just numbers.  As I have said at several ham presentations, we have become obsessed with wanting or owning a 100 dB radio.  Of course bigger numbers are generally desirable, at least up to a point. Back when we had 70 dB radios, which was virtually every up-conversion radio made, the difference between a 70 dB radio and an 85 dB radio was huge.  Now the question is, once we have a whole slew of 85 dB or better radios (close-in dynamic range), what else do we look at?  Hopefully all sorts of things: clean receive audio with low fatigue, clean transmitter IMD, good ergonomics, stable software/firmware, reliability, warranty service, etc.

  Also, 85 dB is fine most of the time.  The TS-990S tests out between 85 and 98 dB, depending on the band, when measured at 2 kHz.  This is because it happens to be reciprocal mixing dynamic range (RMDR) limited at 2 kHz on all bands.  At 5 or 10 kHz the phase noise is much less of an issue.  If the RMDR is 85 dB and there is a really strong CW station 2 kHz away, the limit may actually be the key clicks of that very strong station 2 kHz above or below in frequency.  On SSB the transmitted IMD is virtually always the limit when trying to copy an S3 signal with an S9+30 dB signal 3 kHz away.
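For readers who want to see where an RMDR figure like that comes from, here is a small sketch using the commonly quoted approximation RMDR ≈ -L(offset) - 10*log10(bandwidth). The phase-noise values are made up for illustration, not measurements of any particular radio:

```python
# Back-of-envelope reciprocal mixing dynamic range (RMDR) estimate from
# composite LO phase noise.  The phase-noise numbers are illustrative.
import math

def rmdr_db(phase_noise_dbc_hz, bandwidth_hz):
    return -phase_noise_dbc_hz - 10 * math.log10(bandwidth_hz)

BW = 500  # Hz, typical CW measurement bandwidth

for offset_khz, pn in ((2, -115), (10, -135)):   # assumed dBc/Hz at each offset
    print(f"{offset_khz:>2} kHz offset, {pn} dBc/Hz -> RMDR ~ {rmdr_db(pn, BW):.0f} dB")
```

With those assumed numbers the 2 kHz RMDR comes out near 88 dB while the 10 kHz figure is about 20 dB better, which mirrors the behavior described above: phase-noise limiting close in, much less of an issue at 5-10 kHz.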

Q:  I wouldn't expect your test results to change based on the other software enhancements or fixes that I've seen on the road map. 

Reply:  There actually have been some software issues that affected basic measurements. 

Q:  Even Adaptive Predistortion (when they eventually add that to the 6000) wouldn't affect the receiver numbers.  Does that seem right or am I missing something?

Reply:  Certainly predistortion has nothing to do with basic receiver measurements.  

Q:  Do you think  the way you rank the receivers in your Receiver Test Data listing will change to accommodate the SDRs?  There are enough differences  that you could make a case for that.  

Reply:  The problem is what is the dynamic range (DR3) of a direct sampling radio, both in the lab and on the air with real signals.  If the 6000 series or the Apache ANAN series are tested in the lab, the DR3 value is very dependent on the test level.  Unlike a legacy radio where it is super clean until it starts to overload, there is low level distortion in a direct sampling radio all the time.  It may be odd order, or at times just some other spurious.  Spurious free dynamic range should look at any near-by spur, not just third order.  I never published any data on the SDR-1000 since the general spurious was way above the third-order spurious. 

Q: At any rate I think many of us are anxious to see where the 6700 lands in your list.  (Will it be #2  #5, etc)

Reply:  That is the problem.  If tested at lower levels, like we actually usually have to contend with on the air (S9 + 40 dB), the DR3 might be in the 80s.   If tested at levels like S9 +60 dB or S9 + 70 dB, the DR3 may well be around 100 dB.  As I said earlier, real QRM signals on the band provide incidental dither (a feature not in the 6000 series chip), and may well smear distortion products into broadband noise.  How does one account for this in a table?
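The dither point is easy to demonstrate with a toy numerical experiment. The sketch below coarsely quantizes a weak tone and compares the worst spur with and without added dither; it is a cartoon of ADC behavior, not a model of any particular radio, and it assumes NumPy is available:

```python
# Toy demonstration of dither: a weak tone through a coarse quantizer
# produces discrete distortion spurs; adding ~1 LSB of noise ahead of the
# quantizer smears those spurs into a broadband noise floor.
import numpy as np

fs, n, bits = 48_000, 1 << 16, 6
step = 2.0 / (1 << bits)                           # quantizer step, +/-1 V full scale
t = np.arange(n) / fs
tone = 3 * step * np.sin(2 * np.pi * 1000.5 * t)   # weak tone, ~3 LSB peak

def quantize(x):
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def worst_spur_dbc(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    spec /= spec.max()                             # 0 dB = the wanted tone
    k = int(np.argmax(spec))
    spec[max(k - 50, 0):k + 51] = 0                # ignore the tone and its window skirt
    return 20 * np.log10(spec.max())

dither = np.random.default_rng(0).normal(0, step, n)   # ~1 LSB rms of dither

print("worst spur, no dither:", round(worst_spur_dbc(quantize(tone)), 1), "dBc")
print("worst spur, dithered :", round(worst_spur_dbc(quantize(tone + dither)), 1), "dBc")
```

The dithered case trades a slightly higher noise floor for much lower discrete spurs, which is the same trade real band noise makes for a direct-sampling receiver.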

Here is my feeling on the subject, and this is a CW contest issue.  Once the DR3 is 85 dB or better, we are going to be fine in a contest / DX pile-up MOST of the time.  SSB contests / DX pile-ups are limited by the other guy’s transmitted IMD products, at least until we have a lot of class A rigs on the air, or a lot of rigs with really well implemented predistortion. 

Let's take the TS-990S vs. the 6700.  Both are in the 85 to 100 dB range, depending on how we measure the radio.  Once the radio is overload-proof, "good enough" from a real-world performance standpoint, I am going to pick a radio to purchase based on all those other very important aspects of what matters to me.  I don't happen to own a K3, yet 63% of the radios at the recent WRTC were K3s.   Why was that?  It works well, it is small and doesn't weigh much, and again, once the radio fulfilled the basic needs very well, it came down to operator skill as to who won.  (They all had the same antenna.)

Q: If the ARRL decides to rate a direct sampling radio vs. band noise, I don’t see any way to directly compare it to my table or the 40 years of history of published data by the ARRL. 

Reply: Here is an example of the problem with any table sorted by close-in (2 kHz) DR3.   That isn't the whole picture, and I have never said it was.  Take the Hilberling at the top of my table.  It has the highest 2-kHz DR3, and it has outstandingly low phase noise (RMDR).  But it doesn't have QSK, and its selectivity (300 Hz at -6 dB at its narrowest) isn't adequate in a DX pile-up.  The next one on my table is the KX3.  It has really high DR3 and its RMDR is outstanding.  However, as the footnote clarifies, its opposite-sideband rejection is only 65 dB.

Some hams go nuts over one number, such as  a K3 owner asking me if he should sell his K3 and buy a KX3.  That is a case of not seeing the forest for the trees.  

Flex spent a fortune on making the 6700/6500 have a very high RMDR value, likely higher than practically necessary.  Any OEM has to look at the BOM (bill of materials) cost and decide where to allocate money among the radio's subsystems.

I really liked the 6700 in the CQWW 160 CW contest in January.  I also liked the TS-990S in CQWW SSB in October of 2013.  Both radios are very different and have their own quirks.  QSK was broken with FW 1.1 in January.  The preamp gain of the Kenwood was way too high on 10 meters back in October, but has since then been improved.  

Today if I am looking at a purchase, there are at least 10 radios that should be in my consideration list.  The 6000 series would certainly be one of them.  I once bought a $10,000 radio 10 years ago, and it went away after 5 months.  It didn’t do enough better to warrant my investment in the radio, so I sold it and put up two more towers and yagis!  

Final comment:   Some of the numbers the League publishes are, I think, meaningless.  What does a DR3 or blocking dynamic range number mean if it is measured with a 1-Hz filter?  Not much as far as I am concerned.  Now we are going to have to come up with a meaningful way to measure direct sampling radios.  Hopefully whatever the ARRL chooses has more relevance than the numbers one can get in the lab with a 1 Hz filter, which bears no resemblance to how we use a radio on the air.

 73, Rob, NC0B    

Howard

(Edited)
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
I am sorry to be so late to this thread but I was on a 4 day trip to a hamfest and customer meetings in Canada.

The bottom line is that I believe that the used FLEX-6300 Rob tested is most likely defective.  

I have never seen a newly manufactured radio of any model in our 14 year history that measured in this IMD range.  Below are the ARRL Lab measurements published in the April 2015 issue of QST for the FLEX-6300.  The ARRL Lab blind purchased a 6700 and 6300 for their review.  You can see that the ARRL's numbers for the 6300 are 10 dB higher than Rob's numbers even on 6m.  Our typical lab measurements agree with the ARRL numbers for the FLEX-6300.



Out of courtesy, the ARRL gives every manufacturer the opportunity to review test data and comment if there are material discrepancies between the lab results and expected performance. This has been my experience with Rob in the past as well.  

On March 6th, Rob sent me an email saying that he was seeing some "strange non-monotonic IMD data" in his measurements on the used FLEX-6300 he was testing.  Rob provided no specific measurement data.  He asked if I would like to have the unit sent back for evaluation.  I said, yes. 

On March 7th we provided a return label so that we could take a look at the problem.  Rob did not ship the unit until March 14, and it just arrived this afternoon.  I have not had a chance to look at the unit yet.  Rob published the data without giving us any opportunity to review it or to test the radio to see if there is a component failure.  There are a number of components in the signal chain whose performance can be degraded by partial ESD damage, for example.  It would have been helpful to have the actual data to review and then to be able to verify that the hardware is in proper working order, since the radio was second hand.

I have not read this entire thread, so there may be other comments or questions that I have not had time to address.  I wanted to put the facts out in public on this issue first.

73,
Gerald
Photo of Jd Dupuy

Jd Dupuy

  • 155 Posts
  • 60 Reply Likes
Well this is encouraging news! Hope you had a good trip.
Photo of Bill W2PKY

Bill W2PKY

  • 503 Posts
  • 85 Reply Likes
I'd be interested to learn why there was a second test, on just two radios, and on the 10m band only?
Photo of Ken - NM9P

Ken - NM9P, Elmer

  • 4180 Posts
  • 1332 Reply Likes
I was thinking that such a sudden demotion of the 6300 must be the result of some kind of anomaly in the rig.  It isn't THAT far behind the 6700/6500!

I would assume that once you get this cleared up, Rob will amend the report?
Photo of Lawrence Gray

Lawrence Gray

  • 158 Posts
  • 80 Reply Likes
Why would anyone test a used radio and publish questionable results without contacting the manufacturer?

Puts the entire testing program in question.  I ran an electronics manufacturing company for many years--this is certainly not the way high quality/accuracy testing is performed.  Very strange.

Larry, W1IZZ
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2311 Posts
  • 949 Reply Likes
Thank you Gerald. The other concern is the 10 meter performance of the 6700.
(Edited)
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1845 Posts
  • 669 Reply Likes
Hi Gerald,
Rob mentioned that he is testing another 6300 this week, so that will be another data point, as well as what you find on the radio he sent in for review.

There is also a question about whether any of the firmware updates could have affected the results.  The numbers with the preamp ON vs. OFF have changed since his earlier tests.   Do you or Steve have any thoughts on that possibility?

Regards, Al / NN4ZZ  
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 -  V 1.10.16
Win10
 
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
Al,
This is all new information to me, and I have not had any time to analyze it.  We have been building the 6500/6700 for four years and the 6300 for three years, and this is the first time I have heard anything of this.  Everything would be speculation without analysis.
Gerald
Photo of Peter K1PGV

Peter K1PGV, Elmer

  • 544 Posts
  • 321 Reply Likes
Mr. Gray (above) said it best, I think:

"Why would anyone test a used radio and publish questionable results without contacting the manufacturer?"

It feels like there's a piece of the puzzle missing. Mr. Sherwood is generally known for the quality (rigor and reliability) of his testing. Testing a used radio, getting unusual results, and then simply publishing the results as being representative of that model radio is very clearly not best engineering practice.

Something's not right. Methinks there must be more to the story.

Peter
K1PGV
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1845 Posts
  • 669 Reply Likes
An update from Rob -- 

I borrowed a local 6300 from one of the Boulder Amateur Radio Club members, as we had a meeting tonight, and ran it through the lab this evening.  This second sample measures 3 dB better than my January measurements on the "used" 6300 provided by Gerald.  Something degraded when I retested the Flex-provided 6300 in March.  Gerald will evaluate what is wrong with the "used" 6300 in the coming days.

 My website has been updated as of Tuesday night.  There is more variation than I would like to see, considering the sample from Flex, the one Adam tested in 2015, and the one I borrowed tonight.


Al's comment -- As noted earlier, Rob is known for his quality and impartial testing. We should wait to see what Gerald and FRS find out about the radio Rob returned, to help get to the root cause of the variability.  The same radio tested differently over time, and also differently from the one Adam Farson tested.  Is it component variation, component degradation, a firmware difference, or something else?


Regards, Al / NN4ZZ  

al (at) nn4zz (dot) com

SSDR / DAX / CAT/ 6700 -  V 1.10.16

Win10 


Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
I would like to state that I personally trust Rob Sherwood's integrity.  I believe him to be an honest and sincere person.  While I may not always agree with Rob on every subject, I respect him a great deal.  Any question of his integrity on this forum is not appropriate.  With regard to this topic, we need some more time to investigate using an engineering approach rather than anecdotal information.  

Some of you have heard me give a talk at various ham events titled, "Grokking Receiver Performance."  One of my slides shows Rob's receiver performance web page with the 6700 at the top of the chart.  I tell the audience that since we are at the top, I can say that the chart has reached the point where it is meaningless and the wrong focus.  It has outlived its original purpose.  Above about 90 dB or so of IMD DR3  it doesn't really matter.  At 100 dB a dB or so more is purely ego and has virtually no impact in the real world.  It is called the law of diminishing returns.  

To quote Rob,
Once the DR3 is 85 dB or better, we are going to be fine in a contest / DX pile-up MOST of the time. 
Photo of Lawrence Gray

Lawrence Gray

  • 158 Posts
  • 80 Reply Likes
I do not understand testing used equipment in unknown condition, obtaining questionable results, and publishing those results.   Then obtaining another used piece of equipment in unknown condition and testing it to verify the results.   

If the testing is to mean anything at all (which is highly questionable), all testing should be done with new equipment.  Unusual results should never be published until verified and there is a discussion with the manufacturer.

I'm sure that the person doing the testing is honest.  However, testing used equipment in unknown condition is not an appropriate testing methodology.  Publishing unusual results before verification and discussion is also not an appropriate testing methodology.

The Sherwood results make no difference to me.  I didn't purchase Flex equipment because they were number 1 or number 12--it doesn't really matter in the real world.  I purchased Flex because I can see the whole band.  I can have 4 slices open simultaneously, running Skimmer on each one......  I'm not going to run out and buy a new rig because it is now #1 on the Sherwood chart, particularly given this example of the testing methodology.

Larry, W1IZZ
Photo of Duane, AC5AA

Duane, AC5AA

  • 447 Posts
  • 102 Reply Likes
Hi Gerald - was this one of the user settings available on the SSDR panel, or was it an "inside the code" setting that users can't adjust?
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
It's hard-coded in the radio firmware.  There is no reason for the user to adjust it, because it requires lab equipment to verify the settings.  There are multiple registers that interact, so you have to know what you are doing.
Photo of James Del Principe

James Del Principe

  • 304 Posts
  • 45 Reply Likes
Gerald, could this result in a future SW release for all of us to benefit?   I have a 6500 running 1.10.16 non-beta. Is this a candidate?     73, Jim
Photo of Tim - W4TME

Tim - W4TME, Customer Experience Manager

  • 9186 Posts
  • 3541 Reply Likes
Yes, it is possible that a future software release will contain logic changes to benefit all FLEX-6000 owners once we have had an opportunity to fully vet the change. 
Photo of Matt NQ6N

Matt NQ6N

  • 109 Posts
  • 44 Reply Likes
I'd like to make a suggestion to any of the engineers on the list who have a deep understanding of the nature of the tests that Rob performs on the receivers: 

I think it would be extremely useful to the amateur radio community if someone created a "Sherwood Simulator", a program that would allow the user to easily simulate different performance characteristics of the signal chain in various architectures and under various signal environments. 

For example, it would be nice to be able to hear band noise and signals at an idyllic rural location with a perfect receiver, and then switch between simulations of *actual* receivers for the same signal environment.  Maybe a station 400 Hz away with a 500 μV signal, and another 1 kHz away with a 50 mV signal, while trying to copy a 3.2 μV signal.

It would be very informative to be able to tinker with AGC parameters, receiver architectures, and signal environments to allow hams to easily determine which architectures and performance requirements matter most to their needs, and also to help dispel myths about one architecture vs another.  

I'm not sure how accurate such a simulation would end up being, but I think it might help provide the average ham with a more intuitive foundation for RX performance metrics. 

73,
Matt NQ6N
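To make the idea a bit more concrete, here is a very small analytic sketch of the kind of calculation such a simulator would perform for the scenario Matt describes. The IIP3, noise floor, and signal levels are hypothetical, and the classic two-tone formula below assumes two equal-level interferers; a real simulator would also model AGC, phase noise, and filtering:

```python
# Tiny "Sherwood simulator" sketch: estimate how loud the third-order
# product from two strong off-channel signals lands on a weak desired
# signal.  All numbers are hypothetical; purely analytic.
import math

def dbm_from_uv(microvolts, z=50.0):
    """Convert a signal level in microvolts (into z ohms) to dBm."""
    v = microvolts * 1e-6
    return 10 * math.log10((v ** 2 / z) * 1000)

IIP3 = +20.0                        # assumed input intercept point, dBm
desired    = dbm_from_uv(3.2)       # the 3.2 uV signal we are trying to copy
interferer = dbm_from_uv(50_000)    # two equal 50 mV off-channel signals

imd3 = 3 * interferer - 2 * IIP3    # classic two-tone third-order product level
print(f"desired signal : {desired:7.1f} dBm")
print(f"IMD3 product   : {imd3:7.1f} dBm")
print("desired signal is", "covered by" if imd3 > desired else "clear of", "the IMD3 product")
```

With these assumed numbers the third-order product lands well above the 3.2 μV signal, which is exactly the situation the narrow-spaced dynamic range figure tries to capture.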
Photo of Andrew O'Brien

Andrew O'Brien

  • 384 Posts
  • 44 Reply Likes
I read elsewhere  that Sherwood did a test today  using another 6300 and could not replicate the findings he published 3 days ago.
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1846 Posts
  • 670 Reply Likes
Hi Andrew,
Was that the test I mentioned above in the "update from Rob"?    He ran it last night (Tuesday 21-Mar) and posted the results last night (or very early this morning).

Or do you think there was another test run today (Wednesday)?  

As mentioned above he is seeing some variability in the results.   

Regards, Al / NN4ZZ  
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 -  V 1.10.16
Win10
  
Photo of Andrew O'Brien

Andrew O'Brien

  • 384 Posts
  • 44 Reply Likes
It was the test based on a 6300 that a friend loaned him.  I may have assumed it was today but could have been Tuesday.
Photo of Paul Christensen, W9AC

Paul Christensen, W9AC, Elmer

  • 323 Posts
  • 138 Reply Likes
>"I do not understand testing used equipment in unknown condition, obtaining questionable results, and publishing those results.   Then obtaining another used piece of equipment in unknown condition and testing it to verify the results. If the testing is to mean anything at all (which is highly questionable), all testing should be done with new equipment.  Unusual results should never be published until verified and there is a discussion with the manufacturer.  I'm sure that the person doing the testing is honest.  However, testing used equipment in unknown condition is not an appropriate testing methodology.  Publishing unusual results before verification and discussion is also not an appropriate testing methodology."

I disagree.  It's Sherwood's prerogative to test any way he wants.  He needs no permission from any of us to release his findings. 

Like Sherwood, we can test and publish just as he does.  However, few folks have the test equipment and wherewithal to perform these complex performance tests.  He's testing based on what he deems important.  We can accept his result - or reject it. 

Sherwood clearly discloses the fact that the recently tested units were a second sample.  I fail to see why the performance data should change whether the tested unit is new or used.  Me, I would rather see a sample of used units after they have had an opportunity to age.  I am less concerned about the performance of one new unit than a string of used units that have been in the field for a long time.  I am also more concerned about variability of performance across multiple units whether or not testing occurs across new or used units.  We should all be asking why there's a 12 dB close-in DR variance in Sherwood's recently tested 6700 on 10m.  It's a question, not an attack.

I would like to see Sherwood and Flex come together with a unified answer.  Was a small set-up detail missed in testing? Were the units faulty from production?  Did performance degrade over time?  These are the important matters -- not Sherwood's reputation, not his decision to test used equipment, and not whether he needed to obtain anyone's permission to publish his data. 

Paul, W9AC
Photo of Ned K1NJ

Ned K1NJ

  • 313 Posts
  • 80 Reply Likes
     The mere fact that SDRs have come into the picture complicates the process. One might possibly expect radios (a "new" radio with each release) to actually improve. It would be nice to see that happen and be verified by a highly regarded outside source. Do not shoot the messenger.  Test and verify.

Ned,  K1NJ
Photo of Gopro

Gopro

  • 30 Posts
  • 3 Reply Likes
Paul,
you're absolutely right!
Photo of Norm - W7CK

Norm - W7CK

  • 754 Posts
  • 160 Reply Likes
I feel pretty ignorant when it comes to trying to get a thorough understanding of what each of the columns really means, and I'm sure I'm not the only one.  Until I read through the comments here, I had no idea what the "Dynamic Range Narrow Spaced (dB)" column really meant, and I had no idea that anything over about 85 is pretty much undetectable by the human ear.  That type of information is invaluable.

It would really be nice if there were a SIMPLE explanation of each column, using the column titles and not some other name that means the same thing: how the data in the column applies to everyday operation of a transceiver (practical application), whether a high or low number is better, and what range is totally acceptable (general use / contesting).   Like the dynamic range, where anything over 85 is fluff for the average operator.   I never knew this and have learned a bit just reading the comments on this thread.
Photo of Rick

Rick

  • 160 Posts
  • 21 Reply Likes
I agree with you, and I suspect a high percentage of operators look at those charts from the standpoint of who's in 1st place, 2nd place, etc. I too would like a less technical explanation of the numbers in those tables. I understand a few, maybe, but I'd like to know more.
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2311 Posts
  • 949 Reply Likes
Noise floor - the lowest noise level of the radio (includes receiver noise). This is basically the baseline noise level when the rig is not connected to any signal source (antenna or signal generator). Lower is better, meaning the receiver noise will not cover up weak signals.

AGC threshold - the lowest signal level at which the AGC activates and compresses the signal toward the target (measured in μV and dB). Lower is better: an AGC that can engage on weaker signals is better.

100 kHz blocking -

Sensitivity - basically the same as #1, but tells you the lowest-level signal you can hear.

LO noise - local oscillator noise in dBc/Hz.
Spacing - the offset it is measured at. 10 kHz is common, but Rob tests at 50 kHz for most rigs as well.

Filter ultimate (dB) - a measurement of strong signals leaking through the stop band of the filter.

Dynamic range, wide - the difference between the strongest and weakest signals the rig can handle when the test signals are widely spaced (usually 20 kHz apart).
Dynamic range, narrow - the same, but with the test signals closely spaced (2 kHz apart).

For the last two, higher is better, because more dynamic range means the rig can receive both strong and weak signals at the same time.

The last one is used to rank the receivers and is basically a measurement of how the rig will perform in crowded band conditions. It is important if you're trying to dig out weak signals when strong signals are nearby.
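For anyone curious how that narrow-spaced number is typically derived in the lab, here is a tiny sketch using the classic relation between input intercept point and noise floor. The values are illustrative only, and as Rob notes above, the relation is much less clean for direct-sampling radios:

```python
# Classic relation between third-order dynamic range, intercept point,
# and noise floor: DR3 = (2/3) * (IIP3 - MDS).  Illustrative numbers.
def dr3_db(iip3_dbm, mds_dbm):
    return (2.0 / 3.0) * (iip3_dbm - mds_dbm)

print(f"DR3 = {dr3_db(+20.0, -127.0):.0f} dB")   # e.g. IIP3 +20 dBm, MDS -127 dBm -> ~98 dB
```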

All of this really boils down to two things you do with a receiver:

Receive weak signals, period
and
Receive weak signals in crowded band conditions (contest and DX)

Any receiver really can pick out a weak signal if the band is quiet and there's no one else. However if you have many strong signals, they will drown out the weak ones, even if they're not on the same frequency. This is where dynamic range comes in. 

For rag chewing or working strong signals it matters little. 

Ria
Photo of Rick

Rick

  • 160 Posts
  • 21 Reply Likes
Thank you very much Ria! I will print this out and keep it handy.
73
Rick, W2JAZ
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1845 Posts
  • 669 Reply Likes
Rob has a detailed explanation of his table and some background on why he sorts it by 2 kHz dynamic range.  It is available in DOC or PDF format and the links are at the top of the table.  Also, here is a link to the PDF version.

http://www.sherweng.com/documents/Terms%20Explained%20for%20the%20Sherwood%20Table%20of%20Receiver%2...

Regards, Al / NN4ZZ  
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 -  V 1.10.16
Win10
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2311 Posts
  • 949 Reply Likes
Hadn't noticed that.

I did forget 100 kHz blocking (BDR), which is basically the difference between the minimum discernible signal and an off-frequency signal that causes 1 dB of gain compression. This is essentially another factor in weak-signal performance.
Photo of KY6LA - Howard

KY6LA - Howard, Elmer

  • 3729 Posts
  • 1579 Reply Likes

Here is a great TUTORIAL on Youtube where you can see what a lot of these measures are on a spectrum analyzer

What is a dB, dBm, dBu, dBc, etc. on a Spectrum Analyzer?

https://www.youtube.com/watch?v=1mulRI-EZ80

Photo of Dave - WB5NHL

Dave - WB5NHL

  • 284 Posts
  • 63 Reply Likes
Norm;
Here is a link to a great explanation of Reciprocal Mixing Dynamic Range (RMDR) by the ARRL's Bob Allison. His explanation includes a real-life example in a single sentence that puts it into context! Once you read this description, you can go back to Sherwood's data and compare today's receivers with yesteryear's. It was a real eye-opener for the old timers when I did a presentation for our local club.  http://www.arrl.org/forum/topics/view/177
Photo of Bill -VA3WTB

Bill -VA3WTB

  • 3787 Posts
  • 913 Reply Likes
Many are so into specs, as though they determine whether or not you should buy a certain radio. The specs tell only a small part of what makes a radio a really good radio. Consider a car magazine, say Car and Driver. They publish the car's specs, like 0 to 60 in seconds and the 1/4 mile. But they also talk about ride, handling, driver comfort, controls, visibility, and options. Other than the specs on our radios, what makes one stand out from the others as a user? See, the specs do not tell you the real intangibles.

Sherwood only deals with radio specs. But what if he discussed, for SDRs, things like fit and finish of the radio? Is the software easy to learn and use? Is the software well implemented, well written? Does it show the operator everything they need to know at any moment? What is it like to live with day to day? And what is the customer support like?

So, to me specs are interesting, but they have little to do with my actual enjoyment.
(Edited)
Photo of Ross - K9COX

Ross - K9COX

  • 345 Posts
  • 107 Reply Likes
Perhaps the tubes were weak on the 6300.
(Edited)
Photo of Lee - N2LEE

Lee - N2LEE

  • 296 Posts
  • 152 Reply Likes
That's why I rotate and polish the tubes every month.
Photo of Norm - W7CK

Norm - W7CK

  • 754 Posts
  • 160 Reply Likes
Thank you all very much for the explanations and links.  That really helps!
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
Official Response
I have good news on my testing of the radio Rob Sherwood was evaluating.  What I thought might be a hardware problem turned out to be suboptimal software settings.  I am glad that Rob noticed the performance difference, because it triggered deeper analysis.

Once I optimized the settings, I found that it improved the performance not only of the FLEX-6300 Rob tested but also materially improved another randomly selected unit from a different production lot.  After optimization, both performed at roughly the same level (IMD DR3 in the high 90s), which is materially better than even the ARRL Lab measurements.  The hardware was not touched in any way.

When Rob returns from his vacation, I will send him the unmodified FLEX-6300 he was testing.  We will create a new software test release that has the optimized settings.  He can upgrade/downgrade the software to compare before and after, which will demonstrate that the fix is purely in software.  While I have not had time to test this exhaustively on the 6500/6700, a quick check leads me to believe that the optimized settings will improve the IMD performance of all FLEX-6000 Series radios.

73,
Gerald
Photo of K1UO - Larry

K1UO - Larry

  • 842 Posts
  • 135 Reply Likes
Does this software fix have a tracking number?
73,
Photo of Ken - NM9P

Ken - NM9P, Elmer

  • 4172 Posts
  • 1331 Reply Likes
Excellent! My 6500 with even better IMD performance than before!
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1839 Posts
  • 660 Reply Likes
Gerald,
Great news.   Was the firmware change that affected the performance related to a specific enhancement or bug fix in one of the previous releases?

I suggested that Rob may want to start adding the software version to the footnotes in his table, and I think he is going to do that.   Especially with SDRs, I think this is a good idea.

As was noted earlier, the difference is probably not noticeable to users in most cases, but it is nice to get the best performance possible, and it's also nice to stay at the top of the pile in Rob's table.

Regards, Al / NN4ZZ  
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 -  V 1.10.16
Win10
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2310 Posts
  • 948 Reply Likes
Or just put it next to the radio name and date.

E.g., instead of
FlexRadio Systems
6700
Hardware Updated

something like
FlexRadio Systems
6700
H/W Updated, S/W v1.10.16
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1839 Posts
  • 660 Reply Likes
Ria,
Good idea, I like that even better....

Regards, Al / NN4ZZ  
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 -  V 1.10.16
Win10
Photo of Rick

Rick

  • 160 Posts
  • 21 Reply Likes
This is good news, I hope, as last night I ran what an engineer would call a crude experiment using the radios in my shack at present - a Flex 6300 and 5000A, a K3 (with some S-model upgrades), a KX3, and an Icom 9100. I recorded noise floors using the S meters on 80, 40, and 30 meters, ensuring that filters, preamps, etc. were set to be comparable on all 5 radios. I found the noise level on my 6300 to be 2-3 S units higher on 40 and 30 than on any of the other radios, even the somewhat long-in-the-tooth 9100. The Icom, however, did not handle the peak static crashes as well as any of the other radios, especially on 80 and 40m. I hope the software fix Gerald has mentioned improves this situation.
Rick, W2JAZ
Photo of Walt

Walt

  • 236 Posts
  • 75 Reply Likes
And I hope this type of exhaustive testing is added to the alpha testing process, so that with every future software release the user can be confident that there have not been any adverse changes in performance.

Cheers
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2310 Posts
  • 948 Reply Likes
We do test new versions exhaustively, but not everyone has access to Rob Sherwood's equipment and expertise, and Flex's lab only has so many (human) resources. So we're not going to catch every last little thing but performance tweaks and deficiencies are definitely noticed in real world conditions. So this essentially means that something which doesn't affect real world performance won't be noticed by most Alpha testers. The good news is that it won't be noticed by anyone else, either. 

Ria, N2RJ
Alpha team
Photo of Steve Gw0gei

Steve Gw0gei

  • 193 Posts
  • 50 Reply Likes
And I for one certainly appreciate the efforts of the Alpha team in trying to ensure that the end product released to users is as near perfect as possible. For those of us who regularly contest with a Flex 6000 radio that's really important, and it has given me the confidence, after a few release cycles, to upgrade within hours of a new release rather than delaying trying it for fear of bugs impacting one of my series contest results. This reliability helped me gain another HF championship win in the RSGB 2016 series, and the 2017 series is going well so far too. Looking forward to some better close-in performance from the next release now. Keep up the good work.
73 Steve GW0GEI / GW9J
Photo of Walt

Walt

  • 236 Posts
  • 75 Reply Likes
I am sorry that I assumed the alpha testing was the factory testing.  I would never expect anyone in the field to have the same test equipment.

So let me re-phrase that to mean factory-testing.  I expect the factory to make sure that every software release does not reduce the performance of the radio.  I purchased the radio based on specifications and third-party performance testing and I would like the radio to maintain that during its life-cycle.

My 2 cents - I am heading to the field tomorrow so no more chatter from me on the topic.

Cheers and two pints, please . .
Photo of Ria - N2RJ

Ria - N2RJ, Elmer

  • 2310 Posts
  • 948 Reply Likes
Hi Walt,

No need to apologize. But now you know that the Alpha team is made up of real world users. We have power users and regular users so we have a good cross-section. The Flex development team, Tim, Gerald and the rest of the team are very attentive to any issues we may report. We also read the community postings and try to reproduce bugs that are reported here. There is a lot of hard work going on behind the scenes (most of it by the dev team). We are not employees but we are enthusiasts who have a deep interest in the success of the product. 

Gerald taking a close look at the Sherwood results, as well as our discussions in this thread, and communicating directly with all parties involved shows that he cares very much about the performance of the radio, as I am sure you do. I am confident that the fixes he has discussed will bring the radio up to the top-notch spec that Flex users have grown accustomed to.

Ria, N2RJ
Alpha team
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
I am going to answer several questions on the topic and then go QRT on the subject for now.  Here are the facts in bullet form, which is how I think:
  1. This was not a software bug.  The software related to this setting has been the same since we released each radio model.  This is not in code that the software team would normally touch.
  2. I would call this a "discovery" because I serendipitously found a setting that increased SFDR headroom that was counter intuitive to what I thought I knew about the hardware.  
  3. I made all the adjustments manually so they are not yet in the software.  When we do update the software, it will be in the release notes.  
  4. Further testing is needed on the 6500/6700 to see if the same settings apply.  A very quick look indicates that it will apply.
  5. This is not something that alpha testers can be expected to test.  It would even be complex and expensive to do at the factory.  
  6. I agree that independent testing should provide the version number of the software/firmware.  The ARRL does this.
  7. @Rick.  This topic has been covered many times over the years, but I am sure you missed it.  This is a common misconception.  First, never trust a superhet S meter.  They are not accurately calibrated at 6 dB per S unit; they compress that so you are fooled into thinking the noise is lower than it actually is.  On 40m the atmospheric noise on your antenna is actually going to be in the range of S3 to S4 in a 500 Hz bandwidth - more in an SSB bandwidth.  See the chart below (and the worked example after this post).  Also, when you use the 0 dB gain setting on our radios, that means literally 0 dB of gain.  There is no analog gain stage in that setting, which is the most appropriate way to run the radio below 15m, and sometimes even on 15m.  Add gain to lower the noise floor on 15m and above.  That's why the control is there.
73 and QRT,
Gerald
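A rough worked example of the arithmetic behind item 7, with an assumed front-end noise figure and an assumed rural 40m band-noise level (illustrative guesses, not FlexRadio specifications):

```python
# Compare the receiver's own noise floor with typical 40 m atmospheric
# noise, using "honest" S-units (S9 = -73 dBm, 6 dB per S-unit).
# The 14 dB noise figure and -105 dBm band noise are assumed values.
import math

def noise_floor_dbm(nf_db, bandwidth_hz):
    return -174 + nf_db + 10 * math.log10(bandwidth_hz)

def s_units(dbm):
    return 9 + (dbm + 73) / 6          # S9 = -73 dBm, 6 dB per S-unit

bw = 500                               # Hz, CW bandwidth
rx_floor   = noise_floor_dbm(nf_db=14, bandwidth_hz=bw)   # assumed NF at 0 dB gain
band_noise = -105                      # assumed rural 40 m atmospheric noise in 500 Hz

print(f"receiver floor : {rx_floor:6.1f} dBm")
print(f"40 m band noise: {band_noise:6.1f} dBm  (about S{s_units(band_noise):.1f})")
print(f"band noise is {band_noise - rx_floor:.0f} dB above the receiver's own floor,")
print("so adding preamp gain on 40 m buys nothing but reduced headroom.")
```

With these assumed numbers the band noise lands in the S3-S4 region Gerald mentions, far above the receiver's own floor, which is why 0 dB gain is the appropriate setting on the low bands.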
Photo of Gerald - K5SDR

Gerald - K5SDR, Employee

  • 830 Posts
  • 1514 Reply Likes
Oops.  I forgot to recommend that everyone read the article by Joel Hallas, W1ZR, in the June 2010 QST titled "Receiver Sensitivity -- Can You Have Too Much?"
Gerald
Photo of Al / NN4ZZ

Al / NN4ZZ

  • 1839 Posts
  • 660 Reply Likes
Gerald,
Thanks for all the details and information. The only thing that remains a mystery to me (and I may have missed something that explains it) is why the numbers would have changed when Rob did his tests. It sounds like the settings you discovered are new.

If there are no firmware changes that account for it, and no hardware changes, then why did the numbers change when Rob tested the same radio at different times?

Regards, Al / NN4ZZ
Photo of VE7ATJ

VE7ATJ

  • 125 Posts
  • 24 Reply Likes
Personally, I think this is GREAT news -- the problem was due to sub-optimal settings, which can be pushed out to our radios via a firmware upgrade!  Way to go, Gerald and the Flex team.
So, now the question is whether the new settings will show up in 1.11 or in 2.x :-)

Don