New Sherwood 6700 and 6300 tests
Answers
-
Andrew - to answer your question, those tests' purpose is exactly what this entire thread is about - quibbling over trivialities.
Wanna know what I like the Flex for? Not some 1 dB difference in a spec, but that the layout allows me to scope out entire bands, and the interfacing to the other software that I use is seamless.
I wonder what the spec is for that?
-
I am sorry to be so late to this thread but I was on a 4 day trip to a hamfest and customer meetings in Canada.
The bottom line is that I believe that the used FLEX-6300 Rob tested is most likely defective.
I have never seen a newly manufactured radio of any model in our 14 year history that measured in this IMD range. Below are the ARRL Lab measurements published in the April 2015 issue of QST for the FLEX-6300. The ARRL Lab blind purchased a 6700 and 6300 for their review. You can see that the ARRL's numbers for the 6300 are 10 dB higher than Rob's numbers even on 6m. Our typical lab measurements agree with the ARRL numbers for the FLEX-6300.
Out of courtesy, the ARRL gives every manufacturer the opportunity to review test data and comment if there are material discrepancies between the lab results and expected performance. This has been my experience with Rob in the past as well.
On March 6th, Rob sent me an email saying that he was seeing some "strange non-monotonic IMD data" in his measurements on the used FLEX-6300 he was testing. Rob provided no specific measurement data. He asked if I would like to have the unit sent back for evaluation. I said, yes.
On March 7th we provided a return label so that we could take a look at the problem. Rob did not ship the unit until March 14 and it just arrived this afternoon. I have not had a chance to look at the unit yet. Rob published the data without any opportunity to review the data or to test the radio to see if there is a component failure. There are a number of components in the signal chain that can be degraded in performance due to partial ESD damage for example. It would have been helpful to have the actual data to review and then to be able to verify that hardware is in proper working order since the radio was second hand.
I have not read this entire thread so there may be other comments or questions that I have not had time to address. I wanted to put the facts out in public on this issue first.
73,
Gerald
-
Well this is encouraging news! Hope you had a good trip.
-
There is an interest in learning why there was a second test on just two radios, and on the 10M band only?
-
I was thinking that such a sudden demotion of the 6300 must be the result of some kind of anomaly in the rig. It isn't THAT far behind the 6700/6500!
I would assume that once you get this cleared up, Rob will amend the report?
-
Why would anyone test a used radio and publish questionable results without contacting the manufacturer?
Puts the entire testing program in question. I ran an electronics manufacturing company for many years--this is certainly not the way high quality/accuracy testing is performed. Very strange.
Larry, W1IZZ
-
Yes...phase noise on both Receive AND Transmit are very important these days.
I can tell the difference in phase noise on two local KW hams - each less than a mile away. One uses the Kenwood TS-850SAT that I sold to him after I got the 6500 and runs it into a TL922 amp and TH-11DX beam. The other uses an Icom IC-735 into an AL80 and dipoles/verticals.
The "footprint" left by the IC-735 is MUCH wider on my panadapter, and the trailing edges of the phase noise raise the noise floor across a much wider bandwidth.
I can operate much closer in frequency to the stronger 850 station than I can to the 735 station, because the phase noise transmitted by the 735 is much worse, even on CW or when he is tuning up.
(BTW, my friend with my old 850 is close enough, and has enough antenna gain, that he can hear me at S-9 when I am transmitting on 10 meters with only the output of my transverter port, when our antennas are pointed correctly!)
-
I'd like to make a suggestion to any of the engineers on the list who have a deep understanding of the nature of the tests that Rob performs on the receivers:
I think it would be extremely useful to the amateur radio community if someone created a "Sherwood Simulator", a program that would allow the user to easily simulate different performance characteristics of the signal chain in various architectures and under various signal environments.
For example, it would be nice to be able to hear band noise and signals at an idyllic rural location with a perfect receiver, and then switch between simulations of *actual* receivers for the same signal environment. Maybe a station 400 Hz away with a 500 μV signal, and another 1 kHz away with a 50 mV signal, while trying to copy a 3.2 μV signal.
It would be very informative to be able to tinker with AGC parameters, receiver architectures, and signal environments to allow hams to easily determine which architectures and performance requirements matter most to their needs, and also to help dispel myths about one architecture vs another.
I'm not sure how accurate such a simulation would end up being, but I think it might help provide the average ham with a more intuitive foundation for RX performance metrics.
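To put rough numbers on the example signal environment above, here is a minimal Python sketch. The helper function is hypothetical (not from any existing simulator), and it assumes the stated levels are RMS voltages into a 50 Ω load:

```python
import math

def uv_to_dbm(microvolts, r_ohms=50.0):
    """Convert an RMS level in microvolts to dBm across r_ohms."""
    v = microvolts * 1e-6                  # volts
    p_mw = (v * v / r_ohms) * 1e3          # milliwatts
    return 10.0 * math.log10(p_mw)

desired = uv_to_dbm(3.2)        # weak station being copied
near    = uv_to_dbm(500.0)      # 500 uV station 400 Hz away
strong  = uv_to_dbm(50_000.0)   # 50 mV station 1 kHz away
print(f"desired signal:    {desired:6.1f} dBm")
print(f"strong interferer: {strong - desired:5.1f} dB above the desired signal")
```

Running this puts the 50 mV interferer roughly 84 dB above the 3.2 μV signal, which is right in the territory of the close-in dynamic range numbers the thread is arguing about.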
73,
Matt NQ6N
-
Thank you Gerald. The other concern is the 10 meter performance of the 6700.
-
Hi Gerald,
Rob mentioned that he is testing another 6300 this week. So that will be another data point as well as what you find on the radio he sent in for review.
There is also a question about whether any of the firmware updates could have affected the results. The numbers with the preamp ON vs OFF have changed since his earlier tests. Do you or Steve have any thoughts on that possibility?
Regards, Al / NN4ZZ
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 - V 1.10.16
Win10
-
Al,
This is all new information to me without any time to analyze. We have been building the 6500/6700 for four years and the 6300 for three years and this is the first time I have heard anything of this. Everything would be speculation without analysis.
Gerald
-
Mr. Gray (above) said it best, I think: "Why would anyone test a used radio and publish questionable results without contacting the manufacturer?" It feels like there's a piece of the puzzle missing. Mr. Sherwood is generally known for the quality (rigor and reliability) of his testing. Testing a used radio, getting unusual results, and then simply publishing the results as representative of that model is very clearly not best engineering practice. Something's not right. Methinks there must be more to the story.
Peter K1PGV
-
An update from Rob --
I borrowed a local 6300 from one of the Boulder Amateur Radio Club members, as we had a meeting tonight, and ran it through the lab this evening. This second sample measures 3 dB better than my January measurements on the "used" 6300 provided by Gerald. Something degraded when I retested the Flex-provided 6300 in March. Gerald will evaluate what is wrong with the "used" 6300 in the coming days. My website has been updated as of Tuesday night. There is more variation than I would like to see, considering the sample from Flex, the one Adam tested in 2015, and the one I borrowed tonight.
Al's comment -- As noted earlier, Rob is known for his quality and impartial testing. We should wait to see what Gerald and FRS find out about the radio Rob returned, to help get to the root cause of the variability. The same radio tested differently over time, and also differently from the one Adam Farson tested. Is it component variation, component degradation, a firmware difference, or something else?
Regards, Al / NN4ZZ
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 - V 1.10.16
Win10
-
I read elsewhere that Sherwood did a test today using another 6300 and could not replicate the findings he published 3 days ago.
-
Hi Andrew,
Was that the test I mentioned above in the "update from Rob"? He ran it last night (Tuesday 21-Mar) and posted the results last night (or very early this morning).
Or do you think there was another test run today (Wednesday)?
As mentioned above he is seeing some variability in the results.
Regards, Al / NN4ZZ
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 - V 1.10.16
Win10
-
It was the test based on a 6300 that a friend loaned him. I may have assumed it was today but could have been Tuesday.
-
I would like to state that I personally trust Rob Sherwood's integrity. I believe him to be an honest and sincere person. While I may not always agree with Rob on every subject, I respect him a great deal. Any question of his integrity on this forum is not appropriate. With regard to this topic, we need some more time to investigate using an engineering approach rather than anecdotal information.
Some of you have heard me give a talk at various ham events titled, "Grokking Receiver Performance." One of my slides shows Rob's receiver performance web page with the 6700 at the top of the chart. I tell the audience that since we are at the top, I can say that the chart has reached the point where it is meaningless and the wrong focus. It has outlived its original purpose. Above about 90 dB or so of IMD DR3 it doesn't really matter. At 100 dB a dB or so more is purely ego and has virtually no impact in the real world. It is called the law of diminishing returns.
To quote Rob, "Once the DR3 is 85 dB or better, we are going to be fine in a contest / DX pile-up MOST of the time."
-
I do not understand testing used equipment in unknown condition, obtaining questionable results, and publishing those results. Then obtaining another used piece of equipment in unknown condition and testing it to verify the results.
If the testing is to mean anything at all (which is highly questionable), all testing should be done with new equipment. Unusual results should never be published until verified and there is a discussion with the manufacturer.
I'm sure that the person doing the testing is honest. However, testing used equipment in unknown condition is not an appropriate testing methodology. Publishing unusual results before verification and discussion is also not an appropriate testing methodology.
The Sherwood results make no difference to me. I didn't purchase Flex equipment because they were number 1 or number 12--it doesn't really matter in the real world. I purchased Flex because I can see the whole band. I can have 4 slices open simultaneously, running Skimmer on each one...... I'm not going to run out and buy a new rig because it is now #1 on the Sherwood chart, particularly given this example of the testing methodology.
Larry, W1IZZ
-
>"I do not understand testing used equipment in unknown condition, obtaining questionable results, and publishing those results. Then obtaining another used piece of equipment in unknown condition and testing it to verify the results. If the testing is to mean anything at all (which is highly questionable), all testing should be done with new equipment. Unusual results should never be published until verified and there is a discussion with the manufacturer. I'm sure that the person doing the testing is honest. However, testing used equipment in unknown condition is not an appropriate testing methodology. Publishing unusual results before verification and discussion is also not an appropriate testing methodology."
I disagree. It's Sherwood's prerogative to test any way he wants. He needs no permission from any of us to release his findings.
Like Sherwood, we can test and publish just as he does. However, few folks have the test equipment and wherewithal to perform these complex performance tests. He's testing based on what he deems important. We can accept his result - or reject it.
Sherwood clearly discloses the fact that the recently tested units were a second sample. I fail to see why the performance data should change whether the tested unit is new or used. Me, I would rather see a sample of used units after they have had an opportunity to age. I am less concerned about the performance of one new unit than a string of used units that have been in the field for a long time. I am also more concerned about variability of performance across multiple units whether or not testing occurs across new or used units. We should all be asking why there's a 12 dB close-in DR variance in Sherwood's recently tested 6700 on 10m. It's a question, not an attack.
I would like to see Sherwood and Flex come together with a unified answer. Was a small set-up detail missed in testing? Were the units faulty from production? Did performance degrade over time? These are the important matters -- not Sherwood's reputation, not his decision to test used equipment, and not whether he needed to obtain anyone's permission to publish his data.
Paul, W9AC
-
I feel pretty ignorant when it comes to trying to get a thorough understanding of what each of the columns really means, and I'm sure I'm not the only one. Until I read through the comments here, I had no idea what the "Dynamic Range Narrow Spaced (dB)" column really meant, and I had no idea that anything over about 85 is pretty much undetectable by the human ear. That type of information is invaluable.
It would really be nice if there were a SIMPLE explanation of each column using the column titles (and not some other name that means the same thing): how the data in the column applies to the everyday operation of a transceiver (practical application), whether a high number or low number is better, and what range is totally acceptable (general use / contesting). Like the dynamic range, where anything over 85 is fluff for the average operator. I never knew this and have learned a bit just reading the comments on this thread.
-
Norm;
Here is a link to a great explanation of Reciprocal Mixing Dynamic Range (RMDR) by ARRL's Bob Allison. His explanation includes a real life example in a single sentence that puts it into context! Once you read this description, you can go back to the Sherwood's data and compare today's receivers with yesteryear's. It was a real eye opener for the old timers when I did a presentation for our local club. http://www.arrl.org/forum/topics/view/177
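As a rough companion to that explanation: the usual back-of-the-envelope for reciprocal mixing is that the phase-noise-limited dynamic range is about −L(f) − 10·log10(BW), where L(f) is the LO phase noise in dBc/Hz at the interferer's offset and BW is the receive bandwidth in Hz. A minimal sketch, with purely illustrative numbers (not measured values for any radio discussed here):

```python
import math

def rmdr_limit_db(phase_noise_dbc_hz, bandwidth_hz):
    """Approximate reciprocal-mixing dynamic range limit imposed by
    LO phase noise at a given offset, for a given receive bandwidth."""
    return -phase_noise_dbc_hz - 10.0 * math.log10(bandwidth_hz)

# e.g. -140 dBc/Hz at the offset of interest, 500 Hz CW bandwidth
print(round(rmdr_limit_db(-140.0, 500.0), 1))  # -> 113.0
```

The takeaway from the formula: a noisier LO, or a wider receive bandwidth, directly lowers the RMDR limit, which is why transmit and receive phase noise matter so much with strong nearby stations.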
-
The mere fact that SDRs have come into the picture complicates the process. One might possibly expect radios (a "new" radio with each release) to actually improve. It would be nice to see that happen and be verified by a highly regarded outside source. Do not shoot the messenger. Test and verify.
Ned, K1NJ
-
I agree with you, and I suspect a high percentage of operators look at those charts from the standpoint of who's in 1st place, 2nd place, etc. I too would like a less technical explanation of the numbers in those tables. I understand a few maybe, but I'd like to know more.
-
Noise Floor - lowest noise level on the radio (includes receiver noise). This is basically the baseline noise level when the rig is not connected to any signal source (antenna or signal generator). Lower is better, meaning the receiver's own noise won't cover up weak signals.
AGC threshold - The lowest signal level at which AGC activates and compresses the signal toward the target (measured in μV and dB). Lower is better. AGC being able to engage on weaker signals is better.
100kHz Blocking -
Sensitivity - basically the same as the noise floor above, but tells you the lowest level signal you can hear.
LO noise - local oscillator noise in dBc per Hz.
Spacing - the tone spacing at which dynamic range is measured. 10 kHz is common, but Rob tests at 50 kHz for most rigs as well.
Filter Ultimate dB - a measurement of strong signals leaking through the stopband of the filter.
Dynamic Range Wide - difference between the strongest and weakest signals the receiver can handle with wide tone spacing (usually 20 kHz).
Dynamic Range Narrow - the same measurement with narrow tone spacing (2 kHz).
For the last two, higher is better, because more dynamic range means that the rig can receive both strong and weak signals at the same time.
The last one is used to rank the receivers and is basically a measurement of how the rig will perform in crowded band conditions. It is important if you're trying to dig out weak signals when strong signals are nearby.
All of this really boils down to two things you do with a receiver:
Receive weak signals, period
and
Receive weak signals in crowded band conditions (contest and DX)
Any receiver really can pick out a weak signal if the band is quiet and there's no one else. However if you have many strong signals, they will drown out the weak ones, even if they're not on the same frequency. This is where dynamic range comes in.
For rag chewing or working strong signals it matters little.
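For readers who want to see how a couple of the columns above relate numerically, here is a minimal sketch of the textbook relationships between the intercept point, the noise floor (MDS), and the two dynamic-range figures. The input numbers are hypothetical, for illustration only:

```python
def imd_dr3_db(iip3_dbm, mds_dbm):
    """Two-tone third-order IMD dynamic range:
    DR3 = (2/3) * (IIP3 - MDS)."""
    return (2.0 / 3.0) * (iip3_dbm - mds_dbm)

def blocking_dr_db(blocking_dbm, mds_dbm):
    """Blocking dynamic range: the off-channel level that causes
    1 dB of gain compression, relative to the minimum discernible signal."""
    return blocking_dbm - mds_dbm

# hypothetical values: +30 dBm intercept, -127 dBm MDS, -7 dBm blocking level
print(round(imd_dr3_db(30.0, -127.0), 1))   # -> 104.7
print(blocking_dr_db(-7.0, -127.0))         # -> 120.0
```

Note how DR3 grows only 2 dB for every 3 dB of intercept improvement; that is part of why gains above the ~85-90 dB range quoted in this thread buy so little in practice.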
Ria
-
Thank you very much Ria! I will print this out and keep it handy.
73
Rick, W2JAZ
-
Rob has a detailed explanation of his table and some background on why he sorts it on 2 kHz dynamic range. It is available in DOC or PDF format and the links are at the top of the table. Also, here is a link to the PDF version.
http://www.sherweng.com/documents/Terms%20Explained%20for%20the%20Sherwood%20Table%20of%20Receiver%2...
Regards, Al / NN4ZZ
al (at) nn4zz (dot) com
SSDR / DAX / CAT/ 6700 - V 1.10.16
Win10
-
Hadn't noticed that.
I did forget 100 kHz BDR, which is basically the difference between the minimum discernible signal and a signal off frequency that causes 1 dB of gain compression. This is essentially another factor in weak-signal performance.
-
Here is a great TUTORIAL on Youtube where you can see what a lot of these measures are on a spectrum analyzer
What is a dB, dBm, dBu, dBc, etc. on a Spectrum Analyzer?
-
Many are so into specs, as though they determine whether or not you should buy a certain radio. The specs only tell a small part of what makes a radio really good. Consider a car magazine, say Car and Driver. They publish the car's specs, like 0-to-60 time and 1/4 mile. But they also talk about ride, handling, driver comfort, controls, visibility, and options. Beyond the specs on our radios, what makes one stand out from the others as a user? See, the specs do not tell the real intangibles.
Sherwood only deals with radio specs. But what if he discussed, for SDRs, things like fit and finish of the radio? Is the software easy to learn and use? Is the software well implemented, well written? Does it show the operator everything they need to know at any moment? What is it like to live with day to day? And what is the customer support like?
So, to me specs are interesting, but have little to do with my actual enjoyment.
-
Perhaps the tubes were weak on the 6300.