
What are the audio stream specifications for DAX and how do they compare to VAC in the 3000/5000?

st
st Member
edited June 2020 in SmartSDR for Windows
I understand that the native input and output sample rate for DAX connections is 24 ksps, correct? So regardless of what I make a DAX connection to, at the DAX side of the connection the audio stream will be upsampled or downsampled as required to get it to 24 ksps, correct?

What is the native DAX audio sample size, is it 16 bits/sample?

When DAX is selected, are all other digital audio processing functions bypassed (e.g. compression, etc.) on both receive and transmit?

Finally, how does this compare to the VAC connections on the legacy 3000 and 5000 radios? What is the native, internal VAC sample rate and sample size on those radios?

Thanks!

Answers

  • Steve-N5AC
    Steve-N5AC Community Manager admin
    edited February 2017
    There are two types of DAX: DAX audio and DAX IQ.  It sounds like you are talking about DAX audio so I'll focus on that.

    There are two ways to use DAX -- you can connect directly to the API, in which case you will be working with 24ksps VITA-49 data streams.  You can also connect to the sound cards in the PC, where the audio has been upsampled to 48ksps.  In the native API, the data is provided as stereo IEEE-754 single precision floating point data with 0dBFS at 1.0F, give or take.  So this is 32 bits per channel (64 bits per stereo sample pair) with a 23-bit mantissa.
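    As a rough illustration of that format (a sketch only, not the FlexRadio API -- it assumes the VITA-49 framing has already been stripped off, and the byte order here is an assumption), this is how a block of interleaved stereo float32 samples with 0dBFS = 1.0 could be inspected in Python:

    import math
    import struct

    def peak_dbfs(payload: bytes) -> float:
        """payload: interleaved L/R IEEE-754 single-precision samples."""
        count = len(payload) // 4
        samples = struct.unpack(f"<{count}f", payload)  # "<" (little-endian) is an assumption
        peak = max(abs(s) for s in samples) or 1e-12
        return 20 * math.log10(peak)

    # a full-scale left-channel sample next to a quiet right-channel sample
    block = struct.pack("<2f", 1.0, 0.001)
    print(f"peak = {peak_dbfs(block):.1f} dBFS")  # ~0.0 dBFS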

    > What is the native DAX audio sample size, is it 16 bits/sample?

    A little bit tricky, but the data starts its life as 16-bit data in an ADC with an ENOB of 12.1.  As the data is decimated, you swap sample rate for bit depth.  This confuses a lot of people because they look at a 16-bit data converter and assume that its dynamic range is 16 * 6 = 96dB.  And, in fact, this is true at the Nyquist bandwidth (122.88 MHz in our case).  But through decimation, we achieve processing gain.

    To understand how this works, imagine a signal right at the LSB of the 16 bits.  Each sample it bounces between 0 and 1 at 122.88Msps in a pattern of, let's say, 0001 0001 0001 0001.  So every fourth sample it is a 1.  When we decimate this data by two, we now have samples at a 61.44Msps rate, but now we look at two input samples for every one we eject from the decimator.  With the first two samples (00), we eject a 00.  With the next two samples, we see a 01 and so we eject a 01.  Notice that we are outputting 17 bits instead of 16 from the decimator -- we tacked on an extra bit, and we average the bits we decimate to decide what to output.  If you imagine the decimal point in-between the 0 and 1 bit of the 17-bit number, our samples would look like this:
    INPUT: 
    0000000000000000
    0000000000000000
    0000000000000000
    0000000000000001
    OUTPUT:
    0000000000000000.0
    0000000000000000.1
    Now we can decimate one more time and add another bit.  Now when we look at the two samples from our first decimation, we see one has a 0 at the end and the other has a 1.  Since we are half-way in-between, we set the bottom bits to a 01:
    OUTPUT:
    0000000000000000.01
    In the binary system, this is equivalent to saying "1/4" of our original LSB, which is the average of the four input samples (0, 0, 0 and 1) -- the average is 1/4.  In this way, we increase the number of bits of precision and the dynamic range, but it is considerably less data -- now we are sampling at 30.72Msps.  We went from 64 bits on the input for four samples to 18 bits on the output.

    This is not an entirely accurate model for many reasons, but it should give you an intuitive feel for how we use the input data over time to provide more bits of information at a lower speed in the output.  By the time we decimate down to 24ksps from 245.76Msps, we have gained log2(245,760,000 / 24,000) ≈ 13.3 bits.  So we could have 16 + 13.3 ≈ 29 bits of output data, but we're told that only 12.1 are really good, so this works out to 12.1 + 13.3 = 25.4 bits of data.  The 3000/5000 have 24 bits for reference.
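    If it helps to see the arithmetic run, here is a small sketch (mine, not the actual FPGA code) that pushes the 0, 0, 0, 1 pattern through two averaging decimate-by-2 stages and then repeats the bit-growth calculation above:

    import math

    def decimate_by_2(samples):
        # average adjacent pairs -- the "tack on an extra bit" step described above
        return [(a + b) / 2 for a, b in zip(samples[0::2], samples[1::2])]

    x = [0, 0, 0, 1]                # the LSB firing once every four samples
    for _ in range(2):
        x = decimate_by_2(x)
    print(x)                        # [0.25] -> 1/4 of the original LSB

    ratio = 245.76e6 / 24e3         # overall decimation down to 24 ksps
    print(f"bit growth = log2({ratio:.0f}) = {math.log2(ratio):.1f} bits")
    print(f"effective bits = 12.1 + {math.log2(ratio):.1f} = {12.1 + math.log2(ratio):.1f}")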

    The other way to access the data is after we have put it into a Microsoft Windows audio device.  You will see these enumerated in the system as audio channels with DAX RX and TX names.  This data has been upsampled to 48ksps, but is otherwise the same data.  
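    For example, one quick way to list those devices from Python (this uses the third-party sounddevice package; it is not part of SmartSDR or DAX) is:

    import sounddevice as sd

    for idx, dev in enumerate(sd.query_devices()):
        if "DAX" in dev["name"]:
            print(f'{idx}: {dev["name"]}  '
                  f'in={dev["max_input_channels"]} out={dev["max_output_channels"]}  '
                  f'default rate={dev["default_samplerate"]:.0f} sps')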

    For DAX audio, the DAX data replaces the microphone audio so it goes through the compressor and the modulator.  If you do not want to use the compressor, you can disable it.  To remove the modulator and just perform upconversion, select DIGU which will essentially do what you want.  The DAX audio receive data goes through the receive signal chain including all filters, DSP, etc.  Again, you can turn any of this off that you want.  DIGU also removes several of the pieces that don't make sense for digital data (NR for example).

    > Finally, how does this compare to the VAC connections on the legacy 3000 and 5000 radios? What is the native, internal VAC sample rate and sample size on those radios?

    It is essentially the same as what is done for the 3000/5000.  The 3000 and 5000 do downconversion in the analog domain via direct conversion, and the resulting data is 24 bits per sample of real data, which is converted to floating point in PowerSDR at different sample rates (48, 96 or 192ksps).  What you get in those radios is a wider bandwidth of data initially, but it is then filtered, so if you are at 192ksps you get a lot of samples that have been filtered down into a small bandwidth.
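    As a sketch of what that implies on the PC side (the packing and scaling here are my assumptions, not the actual PowerSDR code), converting signed 24-bit samples to floats with full scale at 1.0 looks something like this:

    def s24_to_float(raw: bytes) -> list[float]:
        """raw: packed little-endian signed 24-bit samples, 3 bytes each (assumed layout)."""
        out = []
        for i in range(0, len(raw) - 2, 3):
            # sign-extend the 24-bit value, then normalise by 2^23
            val = int.from_bytes(raw[i:i + 3], "little", signed=True)
            out.append(val / float(1 << 23))
        return out

    print(s24_to_float(b"\xff\xff\x7f\x00\x00\x80"))  # [~ +1.0, -1.0]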

    The fidelity of the data is similar, but the 6000 has both a better receiver and transmitter and more effective bits of output.
  • Andrew VK5CV
    Andrew VK5CV Member ✭✭
    edited February 2017

    Steve,

    I have sort of asked this before, but it was not picked up.

    Where is the sampling rate and frequency accuracy performance determined for DAX?

    The initial ADC sampling clock is TCXO, OCXO or GPSDO derived.

    Does this carry through all the way to 3rd party processes?

    Andrew, de VK5CV.

  • Ken - NM9P
    Ken - NM9P Member ✭✭✭
    edited June 2020
    Thanks, Steve. I follow a little bit -- enough to understand it, but not enough to program anything with it! Another question: does DAX input on USB/LSB go through the TX EQ and Processor now? Some previous versions did not. I.e., do I need to pre-EQ and compress my recordings for the Voice Keyer like I have been doing, or will the 6500 do that on the DAX input?
  • Steve-N5AC
    Steve-N5AC Community Manager admin
    edited December 2016
    This is a simple question with a complex answer.  There are many contributing factors to frequency accuracy and sampling rate accuracy.  Let me hum a few bars of this and see if it gets you the info you are looking for.

    First, sampling rate and frequency accuracy are intertwined in the radio and are both contingent on the accuracy of the master clock.  It is the only clock source that matters for receive data/audio accuracy.  Both the sampling rate and the frequency accuracy come from this single source.  The ADC sampling rate is divided from the master clock.  If you are using GPS or an external reference, it disciplines the master clock and so sets both the base sampling rate and the frequency accuracy.

    Frequency accuracy has two factors: the first is the master clock and the second is the tuning error(s).  Tuning error comes from a difference between the actual and desired frequencies tuned.  We are currently using a 32-bit tuning word for our first mixer, and the tuning word spans 245.76MHz.  Therefore, the error can be up to 245.76MHz / 2^32 = 57mHz (milliHertz).  If the desired frequency falls exactly on a 32-bit tuning word (an integer multiple of 245.76MHz / 2^32), the frequency tuned is exact.  For example, if you want to go to 14.100MHz, the tuning word is 0x0EB00000 and it divides evenly.  If, however, you want to tune to 14.103MHz, the tuning word is 0x0EB0CCCC with a fractional remainder of 0x0.CCCC... discarded.  In this case, you are 45.8mHz off frequency.  For most folks, this is more than sufficient, but if you are trying to measure frequency to the nearest milliHertz, it presents a challenge.  There are three such mixers in the FLEX-6000, all in math and all designed to keep the error down in the milliHertz range, so most will never have an issue with this.  Any radio tuned with a DDS or CORDIC will be similar, and in general only folks doing scientific research at the milliHertz level would be affected (notwithstanding FMT).
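    For anyone who wants to play with these numbers, here is a small sketch of the same tuning-word arithmetic (truncation matches the "fractional discard" described above; the constants are the figures from this post):

    F_CLK = 245.76e6            # mixer tuning span, derived from the master clock
    BITS = 32

    def tuning_error_hz(f_desired: float) -> float:
        word = int(f_desired / F_CLK * (1 << BITS))   # truncated 32-bit tuning word
        f_actual = word * F_CLK / (1 << BITS)
        return f_actual - f_desired

    for f in (14.100e6, 14.103e6):
        print(f"{f / 1e6:.3f} MHz: error = {tuning_error_hz(f) * 1e3:+.1f} mHz")
    # 14.100 MHz: error = +0.0 mHz   (word 0x0EB00000 is exact)
    # 14.103 MHz: error = -45.8 mHz  (the 0xCCCC... fraction is discarded)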
  • Steve-N5AC
    Steve-N5AC Community Manager admin
    edited December 2016
    The equalizer is actually implemented in the CODEC for transmit and is only on mic audio today.  So if you want equalization done, it needs to be done before the audio is provided to DAX.
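    For what it's worth, a minimal sketch of pre-equalising a voice-keyer recording before it is routed into a DAX TX device might look like the following (it assumes the third-party numpy, scipy and soundfile packages; the file name and EQ curve are just placeholders):

    import numpy as np
    import soundfile as sf
    from scipy.signal import butter, sosfilt

    audio, rate = sf.read("voice_keyer.wav")    # hypothetical recording

    # example curve: roll off rumble below 150 Hz, then add a modest presence lift
    sos_hp = butter(2, 150, btype="highpass", fs=rate, output="sos")
    sos_presence = butter(2, 2000, btype="highpass", fs=rate, output="sos")
    shaped = sosfilt(sos_hp, audio, axis=0)
    shaped = shaped + 0.5 * sosfilt(sos_presence, shaped, axis=0)   # ~ +3.5 dB above 2 kHz

    shaped /= max(1.0, float(np.max(np.abs(shaped))))    # keep the result below 0dBFS
    sf.write("voice_keyer_eq.wav", shaped, rate)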
  • K6OZY
    K6OZY Member ✭✭
    edited September 2014
    OMG these replies are amazing.....
  • Ken - NM9P
    Ken - NM9P Member ✭✭✭
    edited December 2016
    That is what I thought, but I wanted to make sure nothing has changed since the last version. Thanks.
  • Andrew VK5CV
    Andrew VK5CV Member ✭✭
    edited November 2015
    Thanks again Steve. If I can squeeze a bit more out of you: does the DAX sample rate at 48K derive from the master clock? When I use a sample rate checker on a DAX sound card and don't get exactly 48,000, is DAX or the checker off? When I measure the 10MHz GPSDO I do see the error of about 40mHz with the slice at 9.999MHz for a 1kHz offset. I do have an interest in FMT on groundwave, but this error is fine for ionospheric paths. Is there an easy way to work out these even tuning words close to 9.999MHz? Andrew
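    On the last question, a rough sketch based only on the first-mixer arithmetic given earlier (Steve notes below that the other mixers complicate the full answer) would be to list the frequencies near 9.999MHz that land exactly on a 32-bit tuning word:

    F_CLK = 245.76e6
    STEP = F_CLK / (1 << 32)        # ~57.2 mHz per tuning-word count

    target = 9.999e6
    base = round(target / STEP)
    for word in range(base - 2, base + 3):
        print(f"word {word}: {word * STEP:.6f} Hz")   # exactly-tunable frequencies near 9.999 MHz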
  • st
    st Member
    edited September 2014
    I am the original poster. Thank you very much for the comprehensive reply, Steve. I'm also glad to see it has generated some interesting additional discussion.

    st
  • Steve-N5AC
    Steve-N5AC Community Manager admin
    edited December 2016
    We actually have all the information to report the exact tuned frequency assuming that the reference is "exactly on frequency" (no such thing, I know).  We have not scheduled this effort as we've felt that there are a number of more important things to complete first, but it could be done in the future.  We've also looked into a special load of software that would provide obscene tuning word sizes (such as 96-bit).  This would increase the accuracy to well beyond what you could probably measure.

    For the 48ksps output on a Windows PC, we are now at the mercy of Windows and the clock in your computer.  Windows will use the computer's clock to resample the 24ksps data we give it and there will be artifacts as a result.  PCs are notorious for sampling rates wandering all over the place.  If you need exact results, stick with the data out of the radio direct without resampling.  We do this with DAX for convenience until a better standard than a Windows sound card is used for digital IF and audio data.  We would like to see VITA-49 as the standard since it was created for this purpose.
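    As an aside, a crude way to see that wander for yourself (a sketch using the third-party sounddevice package; the device name is hypothetical, and note that this measures against the PC's own clock, which is itself imperfect) is:

    import time
    import sounddevice as sd

    DEVICE = "DAX Audio RX 1"       # hypothetical name -- pick one from sd.query_devices()
    NOMINAL = 48_000
    frames_seen = 0

    def callback(indata, frames, time_info, status):
        global frames_seen
        frames_seen += frames

    with sd.InputStream(device=DEVICE, samplerate=NOMINAL, callback=callback):
        t0 = time.monotonic()
        time.sleep(30)              # integrate long enough to average out the jitter
        elapsed = time.monotonic() - t0

    measured = frames_seen / elapsed
    print(f"measured ~ {measured:.2f} sps, offset {measured - NOMINAL:+.2f} sps")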

    Finally, I have not looked at it in detail, but I think you will have a hard time working out the exact frequency.  I've talked with some of the other FMT guys about this, and the deal is this: there are several mixers involved and we don't, today, keep track of where they are all tuned for exact frequency reporting.  If we change the frequency in the radio's main mixers, we send SPI commands to the FPGA.  To avoid doing this all the time, we place another mixer in software that is tuned inside the 12kHz window of each slice receiver.  If you move a few kHz, we adjust in software.  When you hit a boundary, we retune the main mixer and re-center the software mixer.  Essentially, there are a number of moving parts and we would have to tell you what each is doing and what the error on each is.
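    A toy model of that coarse/fine arrangement (illustrative only, not the actual SmartSDR code; the 12kHz window is the one figure taken from the description above) might look like this:

    WINDOW = 12_000.0               # software-mixer window around the hardware mixer

    class SliceTuner:
        def __init__(self, f_initial: float):
            self.f_hw = f_initial   # hardware (FPGA) mixer -- retuning it costs an SPI command
            self.f_sw = 0.0         # software mixer offset within the window

        def tune(self, f_desired: float):
            offset = f_desired - self.f_hw
            if abs(offset) > WINDOW / 2:
                self.f_hw = f_desired       # hit the boundary: retune and re-centre
                self.f_sw = 0.0
            else:
                self.f_sw = offset          # small move handled purely in software

    t = SliceTuner(14.100e6)
    t.tune(14.103e6)                # a 3 kHz move stays inside the window
    print(t.f_hw, t.f_sw)           # 14100000.0 3000.0
    t.tune(14.250e6)                # a big move forces the hardware mixer to retune
    print(t.f_hw, t.f_sw)           # 14250000.0 0.0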
  • Stan VA7NF
    Stan VA7NF Member ✭✭✭
    edited December 2016

    Steve, I see a bit of "proud parent" in this reply, and I really enjoy the details.

    Last month there was a thread about tools and a separate load of GUI and firmware(?) for Lab type features. 

    A "Science Project," as you call it, is in the making.
      * - FMT features that make the Flex the lab measuring device, not the measured device
      * - A time-domain feature that could find where in 50m of coax a pin was inserted, to the millimeter. Fortunately I don't have friends that would do that (any more - SK).

    As an aside, the first software load I have is dated 12/20/2013 (9 months ago): SmartSDR_Beta_v1.0.24.A49EAB51_Installer.exe.  Praise to all the development team and alpha testers that made it so, and to those who continue to make this great radio.  Do we start preparing for the anniversary party?
