This is an update to an earlier blog post describing a moderate-power 2200m class E amplifier with a very low drive requirement. The design had been evolving for some time but is now in a finished state as far as I am concerned. In its current configuration I have many hours on this little amplifier running 250 to 275 watts RF output, including numerous nights running 80% or higher duty cycle for hours at a time. It has proven to be very reliable. The basic design requires just 0 dBm drive (one milliwatt), but I have included a built-in 20 dB attenuator in mine to accommodate the +20 to +24 dBm drive provided by my various exciters.
Use caution when selecting capacitors for the output circuit, namely C1 through C4. It may be very tempting to use a single capacitor of the specified value, but doing so will likely mean operating the capacitor beyond its voltage ratings if it is a film capacitor. Film capacitors must have their voltage derated as frequency increases. Capacitor data sheets usually have curves for this derating. I had capacitors in a 630 meter amplifier fail because I had not taken that into account. Generally smaller value capacitors can handle more voltage at a given frequency than higher value ones, which is why I use several low value capacitors in parallel to reach the desired capacitance.
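To make the parallel-capacitor trick concrete, here is a rough Python sketch (the helper names and example values are mine, not from the schematic): capacitances simply add in parallel, and the RF current divides among the capacitors in proportion to their values, so each small part carries only a fraction of the stress a single large capacitor would see.

```python
def parallel_capacitance(caps_nF):
    """Total capacitance of capacitors in parallel is just the sum."""
    return sum(caps_nF)

def rms_current_per_cap(total_irms_A, caps_nF):
    """RF current divides among parallel caps in proportion to capacitance."""
    total = sum(caps_nF)
    return [total_irms_A * c / total for c in caps_nF]

# Example: four 10 nF parts instead of one 40 nF part.
print(parallel_capacitance([10, 10, 10, 10]))      # 40 nF total
print(rms_current_per_cap(4.0, [10, 10, 10, 10]))  # each cap carries 1.0 A
```

The same logic is why the derating curves matter: each small capacitor sees the full RF voltage but only a share of the current, and smaller values tend to have better voltage-vs-frequency ratings to begin with.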
When selecting a FET, choose something rated 200 volts or more if you plan to run this amplifier at full power. Voltage peaks at the FET drain are about 3.5 times the applied DC voltage. So with 40 VDC on it, the FET is going to see a peak voltage around 140 volts on every RF cycle.
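That rating check is easy to put in code. A hedged Python sketch follows; the 3.5x peak factor comes from the text, while the 1.25 headroom multiplier is my own conservative assumption, not a design rule from the original article.

```python
PEAK_FACTOR = 3.5  # approximate class E drain peak multiplier (from the text)

def drain_peak(vdd):
    """Peak drain voltage for a given DC supply voltage."""
    return PEAK_FACTOR * vdd

def fet_ok(vds_rating, vdd, margin=1.25):
    """True if the FET's Vds rating covers the peak with some headroom.

    The 1.25 margin is an assumed safety factor, not part of the design."""
    return vds_rating >= drain_peak(vdd) * margin

print(drain_peak(40))   # 140.0 V peak at 40 VDC supply
print(fet_ok(200, 40))  # 140 * 1.25 = 175 <= 200, so True
```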
The FET requires good heat sinking. I prefer either mounting the FET directly, with a bit of thermal grease, to a heat sink isolated from ground (note the heatsink will have drain voltage and RF on it, so be careful what might come into contact with it), or using a mica insulator with thermal grease on a normal grounded heat sink. I do not recommend greaseless Sil-Pad thermal pads, as they may not transfer heat well enough for this application. The heat sink on my amplifier is about 5 x 3 x 1.5 inches. A fan on the heatsink is not required for low duty cycle operation such as two-minute WSPR transmissions at 33% or lower duty cycle. For long T/R period modes or frequent transmissions resulting in high duty cycle, you will need either a larger heatsink or a fan. I also have a fan on the bottom pushing air into the amplifier and air exhaust vents on the other end. Again, this is not needed for short transmissions at low duty cycle, but if you are going to run 15 or 30 minute modes or very frequent transmissions, it will be necessary to supply cooling air to L1 and L2. I also have an internal fan ensuring high-volume air flow across those inductors, though that is probably not needed. It was there to move air across the inductors before I added the bottom cover and intake fan, and I didn’t bother removing it.
As noted in the previous post, this amplifier was constructed by making “islands” in a solid copper plated PC board using a Dremel rotary tool. Other methods will surely work as well.
The only future change I might make is to replace the little TO-220 size 34N20 FET with a FDA59N30 in the larger TO-3P package. I have not experienced any FET failures with the present configuration but I like the larger FETs for better cooling efficiency.
The story of this amplifier starts back in 2017 when I held a FCC Part 5 experimental license (WI2XTC). This was prior to FCC granting amateur privileges on 2200 and 630 meters. I was looking for a kilowatt-class amplifier that seemed reasonably easy and inexpensive to reproduce. I settled on the W1VD kilowatt class D design.
After building the amplifier I had a lot of problems with blown FETs. After many months of testing, troubleshooting and trying various things, I got that problem under control for the most part. It turns out this was not a problem with the design or my construction, but simply that modern modes and operating practices are not consistent with the intent of the design. The amplifier was designed for a steady RF signal at its input, with transmission to start and stop by using one of the keying inputs to enable and disable the FET driver IC. That just isn’t how modern computer generated modes work. The software usually provides for PTT (amplifier keying), but it is the opposite of what would be needed to keep the amplifier happy. PTT is asserted before RF generation starts and held until after RF stops. The amplifier needed the opposite to be safe: PTT asserted after the start of RF generation and released before RF stops.
Initially I was experiencing frequent FET failures with any exciter I used, but they were far more common, in fact almost guaranteed, with an exciter capable of amplitude shaping the start and/or end of the RF envelope. I don’t have a storage oscilloscope, but after seeing some FET drain waveforms provided by other users of the same amplifier, it was apparent there were (or could be) voltage spikes exceeding the 200 volt rating of the FQP34N20 FETs at the start and especially the end of a transmission. Additionally, there appeared to be extended times of zero voltage on one pair of FETs or the other, possibly indicating a longer than normal on time. One might wonder if current surges were occurring at those times. After a lengthy search for replacement FETs rated for higher voltage and current but otherwise having similar ratings to the FQP34N20, I tried the FDA59N30. That eliminated FET failures with exciters or modes that are not capable of RF envelope shaping, such as the QRP Labs Ultimate 3S which I use extensively. I had just one FET failure in more than a year of operation, and that one appeared to be due to overheating: the mounting screw on the failed FET was not tight, and both the mounting surface of the FET and the Sil-Pad underneath were discolored in a way that suggested excessive heating. The FDA59N30 is a current production part, while the FQP34N20 is long discontinued and becoming very hard to find except from some overseas sources which are selling counterfeit devices.
During the summer of 2020 I was invited to join the early testing team for the new FST4 and FST4W modes being developed for use at LF and MF. It was one of the better things to happen in 2020! Initially I was able to run these modes using the phasing exciter but I noticed some peculiar glitches on the scopematch (sophisticated RF power and SWR monitor using an oscilloscope) at the start and particularly at the end of transmissions. I also had some intermittent problems with amplitude and phase fluctuations during FST4/W transmissions. Eventually while looking for the source of that problem, I discovered the IR2110 FET driver was not entirely healthy. One side was OK but the other was providing only weak gate drive to the FETs. I replaced the driver and that was the end of being able to transmit using the new modes! FETs were constantly meeting their demise with the new modes, while other modes were OK. My pile of dead FETs was again growing rapidly! At about the same time I learned something about the new modes that I had not previously known. They were intentionally using envelope shaping at the beginning and end of transmissions! (Note: with the general availability release of WSJT-X 2.3.0, the FST4/W envelope shaping can be disabled.) Sure enough, when I tried manually enabling the FET driver after the envelope shaping at the start and disabling it before RF shaping commenced at the end of transmissions, FETs did not fail.
It has never been clear to me exactly where in the amplifier problems start with non-constant amplitude drive, but clearly bad things were happening somewhere. Was it strictly in the output circuit, or was something going wrong in the driver or the pre-driver logic? It would be fair to say I was never entirely happy with the configuration of the amplifier anyway. Since it used a flip-flop to clock the IR2110 FET driver, it required the RF input signal to be at twice the operating frequency. For 137.5 kHz operation, it needed RF drive at 275 kHz. In order to achieve that with common exciters and relatively minimum hassle, I used a frequency doubler circuit before the driver. That always seemed like unnecessary complexity to me, but at the time of construction there were few, if any, alternate driver designs for class D amplifiers that didn’t use a flip-flop and therefore require drive at twice the operating frequency. The doubler also caused some problems running EbNaut, which uses 180 degree phase shift keying.
While struggling to think of a solution to the FST4/W envelope shaping killing FETs en masse, it came to my attention that others were now using drivers for class D amplifiers that did not use the flip-flop and worked with “normal” drive at the operating frequency. I decided to try an experiment. Melding elements from three different designs, I came up with a driver that provided all of the control inputs of the original, required no doubler and allowed for some adjustment of the duty cycle. The circuit uses a 1:9 impedance step up transformer driving a pair of LM311 comparators. The comparator outputs control the IR2110 FET driver. One obvious advantage is that this does not require drive at twice the operating frequency. Another is that, unlike the original driver configuration this one allows for some adjustment of the length of the drive pulses to the FETs. This made it possible to get cleaner drain waveforms with less high frequency ringing.
With the original driver there was always some high frequency drain ringing. With this driver it can be almost entirely eliminated by adjusting the 20 ohm trimmer to vary the duty cycle or length of drive pulses. There is a tradeoff between the circuits. The original amplifier input consisted of a frequency doubler and the flip-flop preceding the IR2110. With that configuration there was little to no change in drain waveforms over a 15 dB drive power range. With the new circuit, drain waveforms change with drive level. The change is minimal over about a 6 dB range but increases outside those limits. The range of acceptable input can be pushed to 15 dB before things start looking really alarming. This worked fine for all but FST4/W modes (prior to the 2.3.0 GA release with envelope shaping disabled). With FST4/W slowly rising from zero to full power at the start of a transmission and slowly decreasing from full power to zero at the end, the drain waveforms went through some ugly periods. I was still occasionally losing FETs.
I wondered if an RF sensing circuit could solve that problem. It should be possible to sample the incoming RF drive, rectify it, and use the resulting DC voltage to control a comparator which would enable the IR2110 only after drive had reached a safe amplitude. The question was whether it would be fast enough to disable the FET driver at the end, while the envelope was decaying. If it was too slow, it might not disable the driver before the amplitude reached a low enough level to cause problems. Never blindly trust my math or circuit design skills, but by my reasoning it looked possible. The envelope shaping occurs over approximately 2.5 seconds for an 1800 second FST4/W transmission. I believe it scales linearly with the T/R period, so for a 15 second transmission it should be about 0.02 second. There should be plenty of time to shut things down if I used an RC time constant of about 0.0002 second. Instead of rambling through all of my rough calculations, let me just say I tested the circuit as built with several hours of FST4-15 transmissions, which would require the fastest timing. There were no glitches evident and no FETs were harmed during the test.
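My rough timing argument can be sketched in a few lines of Python. The linear scaling of shaping time with T/R period is my assumption, as noted above; the point is simply that the detector's RC time constant is roughly two orders of magnitude faster than even the fastest envelope ramp.

```python
SHAPE_1800 = 2.5  # seconds of envelope shaping in an 1800 s transmission

def shaping_time(tr_period_s):
    """Assumed linear scaling of shaping time with the T/R period."""
    return SHAPE_1800 * tr_period_s / 1800.0

RC = 0.0002  # proposed RC time constant of the sensing circuit, seconds

fastest = shaping_time(15)  # FST4-15 is the worst (fastest) case
print(fastest)              # about 0.021 s of envelope ramp
print(fastest / RC)         # the detector responds ~100x faster than the ramp
```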
I built the new driver circuit on a board which was the same size as the original and used the same connectors, so it was a drop in replacement for the amplifier. Similarly the RF sensing and control circuit is a drop in replacement for the no longer needed frequency doubler.
The remainder of the amplifier remains mostly unchanged from the W1VD circuit except for the substitution of FETs as discussed earlier. It should be noted that for power levels above 400 to 500 watts at high duty cycle, a small fan cooling the output transformer is a good idea.
The power supply is about as simple (and efficient) as you can get, a few luxuries notwithstanding. It is unregulated, consisting of a variac, transformer, two bridge rectifiers and two large filter capacitors. It can provide 0 to 50 volts for the FET drains. The power transformer has two secondaries. Each has its own rectifier and filter capacitor. The two are combined at the output terminals of the supply. A small fan is used to blow air across the bridge rectifiers to aid in cooling. Because the filter capacitors are large and the transformer resistance is low, a soft start circuit is used to prevent inrush current problems. There is a separate transformer with a similar configuration to supply 12 volts to the amplifier driver circuits. The 12 volt supply also controls the soft start by means of a comparator which closes a relay to short out a resistor in the AC input to the variac after a short (adjustable) delay.
During the late summer and autumn of 2020 I built a K9AY loop, hoping it would help me hear DX on 2200 meters. Computer modeling suggested the minimum size for good front to back ratio and overall pattern would be twice the size of the original 160/80 meter K9AY loop design. This required a 50 foot mast. I chose to use a fiberglass mast to ensure there would be no interaction with the antenna. Since the “gain” of this antenna at 137 kHz is -55 dB, I was worried about common mode noise ingress. In an effort to minimize any such problems, transformer coupling was used at both ends of the coaxial cable feeding the antenna.
Having limited space I was not sure how successful this project would be. The K9AY would have to be located within 50 feet of my 2200 meter transmitting antenna, over the 160/630/2200 meter radial field, no more than 50 feet from one of the towers and just a bit over 100 feet from the other. That is not an ideal environment for a small directional receiving antenna!
The best location, considering other antennas, seemed to be atop a small mound in the back yard. I immediately had misgivings about that, since I knew the origin of that mound. It was what was left after the lawn area was flattened with a bulldozer about 45 years ago. At the time there was an automobile junk yard next door, spilling over onto this property which was owned by the same party. I had no idea what I might find when I tried to dig a hole to put in concrete for the mast footing! In the first several inches, I encountered several strands of old barbed wire. Lovely! Next was a power steering pump and a water pump. At about the two foot level the real challenge presented itself: a buried concrete slab several inches thick, obscuring about two thirds of my hole area, and tilted at a 30 degree angle with respect to horizontal. Oh, great! It took hours of beating on that slab with a heavy steel bar to break it up and continue excavation. Digging a four foot deep hole 18 inches in diameter with nothing more than a spade is always fun, but I got the job done. It has been suggested on several occasions that I am “determined”. I think that is a nice way of calling me stubborn! But it fits.
When the antenna became operational, front to back was no better than 3 to 6 dB. Some quick experimentation showed that de-resonating the 2200 meter transmitting antenna improved the situation greatly. With that change I could often see 15 dB front to back but not always. Several methods for de-resonating were tried, but it turns out simply disconnecting the bottom of the loading coil/variometer from the secondary of the toroidal impedance matching transformer is as effective as any other method. I modified my station so that I could do that from the operating position and even have the antenna automatically resonated while transmitting and de-resonated while receiving.
Over several weeks it became apparent the antenna’s performance was not stable. The pattern seemed to improve and worsen with environmental factors such as temperature and snow cover. Several other K9AY loop users suggested improving my ground system might help stabilize it but with snow already on the ground I decided that would not be practical until spring. I decided to modify the K9AY to use a vactrol instead of a fixed resistor for the termination. A vactrol is essentially a voltage variable resistor consisting of a LED and a photocell in a small four lead package. I obtained a VTL5C4 vactrol made by Xvive and installed it on the K9AY. Additional control conductors were run to the antenna so I could control the termination resistance remotely from the operating position. This change has thus far allowed achieving at least 17 dB front to back using sky wave signals as a reference on any given night. There have been times when I see more than 30 dB front to back on DX signals. I have no explanation for that, since the computer model suggested a maximum of 17.5 dB. Front to back often undergoes short term changes which I suspect are due to changing vertical arrival angle of signals, possibly with some contribution from skew path signals if that phenomenon exists on 2200 meters. Skew path is common on 160 meters. Termination resistance typically requires adjustment with major temperature changes and after significant snowfall events.
So, with those changes made, how does it work? Better than expected! I have been comparing antennas by listening simultaneously on both using identical receivers feeding identical sound interfaces on the same computer. I am using six instances of WSJT-X monitoring three modes: WSPR2, FST4W-120, and FST4W-1800. SNR as reported by WSJT-X is recorded for every signal received and each antenna it is received with. From that data, the following results have been extracted and calculated. The method is not perfect as there is uncertainty in the reported SNR, especially with weak signals near the decoding threshold. However it is the most practical method to get a reasonable comparison.
Before getting into the results, I should point out that having the new directional antenna has confirmed something I already suspected: I have more man made noise to the southwest/west than to the northeast/east. This means I get a bigger advantage from the K9AY loop when listening to signals from the northeast, which puts many of my local noise sources off the back. Any advantage when listening southwest is largely nullified by the fact that my local noise mostly comes from that direction. During the day, when atmospheric noise is not a factor, my noise floor increases between 2 and 5 dB in the southwest direction compared to northeast. In addition to this increase in the overall noise floor, a number of “interference lines” and some narrow smears can be seen.
Results from the night of 22/23 January, 2021: With the K9AY loop listening northeast, a total of 35 transmissions from European stations were received. Of those, 21 were decoded only on the K9AY loop, while 14 were decoded both on the K9AY and the LNV. Of the latter 14, signal to noise ratio was always better on the K9AY, the improvement ranging between 3 and 7 dB for an average of 4.3 dB. While listening southwest, a total of 47 transmissions from stations in that general direction were received. Of those, 45 were decoded on both antennas with an average advantage of 0.3 dB to the K9AY. One transmission was decoded only using the LNV and one using only the K9AY.
Results from the night of 23/24 January, 2021: Listening northeast, a total of 56 transmissions from European stations were decoded; 25 only on the K9AY and 31 on both antennas. Of the 31, S/N ranged from 2 to 7 dB better on the K9AY for an average of 4.0 dB. Listening southwest, a total of 66 transmissions were received from stations in that direction; 62 on both antennas with an average advantage of 0.2 dB to the K9AY, 3 only on the LNV and 1 only on the K9AY.
Results from the night of 24/25 January, 2021: Listening northeast, a total of 89 transmissions from European stations were decoded, 45 only on the K9AY and 44 on both antennas. Of the 44, S/N ranged from 1 to 11 dB better on the K9AY for an average of 5.5 dB. The K9AY gained greater advantage later in the period. This may have been due in part to increasing static from storms over the central U.S. Listening southwest, a total of 12 transmissions were received from stations in that direction. All were decoded on both antennas with an average advantage of 0.3 dB to the K9AY.
Results from the night of 25/26 January, 2021: Listening northeast, a total of 17 transmissions from European stations were decoded; 7 only on the K9AY and 10 on both antennas. Of the 10, S/N ranged from 2 to 6 dB better on the K9AY for an average of 4.0 dB. Listening southwest, just one transmission was decoded, and only on the K9AY. However, it was a good one, AX4YB (VK4YB with a special prefix for Australia Day).
Results from the night of 26/27 January, 2021: Listening northeast, a total of 6 transmissions from European stations were decoded; 1 only on the K9AY and 5 on both antennas. Of the 5, S/N ranged from 1 to 5 dB better on the K9AY for an average of 3.6 dB. Listening southwest, a total of 18 transmissions were received from stations in that direction; all were received with both antennas with an average advantage of 0.3 dB to the LNV.
Results from the night of 27/28 January, 2021: Listening northeast, a total of 27 transmissions from European stations were decoded; 6 only on the K9AY and 21 on both antennas. Of the 21, S/N ranged from 2 to 6 dB better on the K9AY for an average of 2.8 dB. Listening southwest, a total of 49 transmissions were received from stations in that direction; 45 on both antennas with an average advantage of 0.4 dB to the K9AY, 1 only on the LNV and 3 only on the K9AY.
Results from the night of 28/29 January, 2021: On this night my local noise was somewhat lower than in previous nights, which may have contributed to slightly different results. Listening northeast, a total of 24 transmissions from European stations were decoded; 7 only on the K9AY, 1 only on the LNV and 16 on both antennas. Of the 16, S/N ranged from 0 to 4 dB better on the K9AY for an average of 2.3 dB. Listening southwest, a total of 47 transmissions were received from stations in that direction; 44 on both antennas with an average advantage of 0.6 dB to the K9AY, 3 only on the K9AY. VK4YB was received twice on each antenna, the first time with a 2 dB advantage to the K9AY and the second time equal on both antennas.
Results from the night of 29/30 January, 2021: Northeast there were a total of 21 transmissions from Europe decoded. Of the 10 captured on both antennas, S/N ranged from 2 to 4 dB better on the K9AY for an average of 2.7 dB. Southwest had a total of 38. 37 were received on both antennas with an average advantage of 0.1 dB to the K9AY. One was decoded only with the LNV.
Results from the night of 30/31 January: Northeast had a total of 8, four being heard with both antennas with S/N favoring the K9AY between 2 and 3 dB with an average of 2.7 dB. Southwest there were 40 in total, 36 being heard on both antennas with an average advantage of 0.4 dB to the K9AY. Two were heard only with the LNV and two only with the K9AY.
These results should be considered in the context of “what can I receive with one antenna that I cannot with the other” rather than “how many dB better is one antenna than the other”. Why? Because of the noise blanker settings I am using for the FST4W modes in WSJT-X. The way I have it set, it will first try to decode without any noise blanking. If that succeeds it will stop there. If not it will next try with a noise blanker setting of 5%. If that succeeds it will stop there. If not it will in turn try 10, 15, and 20% but it will stop at any point if a successful decode is obtained. What this means is that if on a given antenna it is able to decode a signal without using the noise blanker or with a low noise blanker level, it makes no attempt to see if it could get a better signal to noise ratio using more noise blanking. But when decoding on the “weaker” antenna it might get one or more levels deeper into noise blanking before obtaining a decode. This can have the effect of reducing the reported difference in S/N between the two antennas. During these tests I saw many cases where it decoded almost immediately on the K9AY but took longer on the LNV. This suggests on the LNV it was requiring more noise blanking to succeed, and that some of the decodes on that antenna might not have happened at all if I used no noise blanking or only one fixed setting. So if anything, the advantage of the K9AY is likely understated in these tests.
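The escalation strategy described above can be sketched as a simple loop. This is a toy model of my settings, not WSJT-X's actual code; the `try_decode` callback stands in for a real decode attempt at a given noise blanker percentage.

```python
NB_LEVELS = [0, 5, 10, 15, 20]  # noise blanker percentages, tried in order

def decode_with_escalation(try_decode):
    """Call try_decode(nb) at each level; stop at the first success.

    try_decode returns a decode result (e.g. an SNR) or None on failure."""
    for nb in NB_LEVELS:
        result = try_decode(nb)
        if result is not None:
            return nb, result
    return None, None

# Toy stand-in: pretend this signal only decodes at 10% blanking or more.
nb_used, snr = decode_with_escalation(lambda nb: -28 if nb >= 10 else None)
print(nb_used, snr)  # 10 -28
```

The key consequence for the comparison is visible in the loop: once a decode succeeds, no deeper blanking level is tried, so the reported S/N on the "stronger" antenna is whatever the first successful level produced, not necessarily the best achievable.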
While not formally summarized in the above results, I have been paying attention to apparent front to back when receiving signals off the back of the K9AY. I say apparent because I am not switching the K9AY to the other direction but instead comparing the S/N ratio on the LNV to that of the K9AY. On some nights, apparent front to back is typically 10 to 15 dB with some values in the single digits. Other nights it ranges from single or low double digits to 24 dB or more. I suspect at times it is even more. For example, I received a transmission from WH2XND at 0 dB S/N on the LNV, but it did not decode at all off the back of the K9AY and could not be seen on any of my waterfalls, fast or slow! That would suggest something on the order of 30 dB difference between the two antennas.
The bottom line is that I am receiving a lot more European DX thanks to the K9AY loop. This antenna is well worth the work and expense that went into it.
Intermittent listening on 630 meters prior to the vactrol modification suggested an even bigger improvement over the LNV to the northeast on that band, though no formal comparison was made due to the lack of a second receiver. On this band there may have been more advantage to the K9AY in the southwest direction as well, but it was hard to tell with just one receiver.
In early 2020 I began phasing out much of the first generation LF equipment and building replacements. My LF operating interests focus largely on DX. As I have learned more about all of this, it became obvious I needed some upgrades. This is the second in a series of posts about new equipment for our lowest frequency amateur radio allocation.
Like the first generation receiver, the transmitting downconverter did not have adequate frequency stability for slow modes on LF. I also wanted something that didn’t tie up my only HF rig when operating on 2200 meters. After reviewing several designs for phasing exciters I settled on a design by W1VD. I built mine Manhattan style using MEPads and MESquares from QRPme.
The MPS6650 and MPS6652 transistors used by W1VD are no longer available. I successfully substituted BC33716BU and BC32716BU devices but I have not been able to achieve the stated +20 dBm output. Mine will only make +16 dBm before the output waveform becomes distorted. This works OK with my amplifier but is a subject I would like to revisit at a later date.
Initially I encountered some difficulty getting good carrier and opposite sideband suppression. I traced the problem to the LO signal to the two mixers not being 90 degrees out of phase. I built several variants of the quadrature hybrid but I could not get accurate 90 degree phase shift or equal amplitude. Trying some alternate approaches, I achieved success using a Wilkinson divider and phase shift network. Some cut and try adjustment of two capacitor values was needed but in the end I achieved accurate 90 degree phase shift with similar amplitude on both ports. I used 6 dB resistive attenuators on the two LO signals before feeding the mixers. The two outputs from this circuit go directly to pin 8 on the two SBL-3 mixers in the exciter. The 6 dB pad, C1, C2, T1, C3, C4 and the associated 49.9 ohm resistor shown in the W1VD exciter schematic were omitted. With this arrangement I was able to achieve better than 55 dB carrier and opposite sideband rejection after careful adjustment of the level and phase balance trimmers in the exciter. If you build this and find it is operating on the wrong sideband, reverse the LO inputs to the mixers. If you look closely at the blue and orange wires coming off the LO divider and phase shift board, you will see they cross over each other on the way to the mixers on the main board below. Mine had ended up being on lower sideband the first time around! One other change should be made to the phasing exciter if you will be operating it into a 50 ohm load: omit the 49.9 ohm resistor in series with the output. The 1 uF capacitor should connect directly to the junction of the two 5.1 ohm resistors.
I am using the same Leo Bodnar GPS Clock that supplies 408000 Hz to the new receiver. It supplies 136000 Hz square wave to the exciter, which I low pass filter before the divider.
I have many hours of operation with this exciter in various modes. It has performed well. One thing this exciter does not like is magnetic fields which can couple 60 Hz energy to the audio circuits. Don’t put it too close to a linear power supply!
In early 2020 I began phasing out much of the first generation LF equipment and building replacements. My LF operating interests focus largely on DX. As I have learned more about all of this, it became obvious I needed some upgrades. This is the first in a series of posts about new equipment for our lowest frequency amateur radio allocation.
After using the original modified SoftRock Lite II receiver for three years, it was time to move on. That first receiver served me very well. With it I was able to make three trans-Atlantic QSOs and heard a lot of DX on various modes. In the end, however, I wasn’t satisfied with the frequency stability of the crystal oscillator, which was about 1 ppm, or a little less than 0.15 Hz drift on 2200 meters. That may seem completely insignificant to the HF, VHF or microwave operator, but for the most serious DX pursuits on LF it is not sufficient. With the one watt EIRP legal power limit, propagation, and high noise levels at 137 kHz, we need very slow modes to succeed over great distances. As a general concept, the slower the mode, the greater the frequency stability needed. Legacy modes include QRSS (extremely slow CW meant to be read visually from a waterfall) and its derivatives like DFCW. Readers may recall my first DX QSO with 2E0ILY used DFCW60, meaning that each “dit” or “dah” takes 60 seconds to send! Drift of 0.15 Hz is clearly visible at that speed and can lead to difficulty “reading” signals at even slower speeds. Today we have various slow digital modes for beaconing and QSOs. At the extreme, EbNaut requires that transmitter and receiver drift be no more than a few tens of microhertz! Others are more tolerant, but current evolution suggests one should strive to stay within 0.01 Hz or better during any 30 minute period if DX is of prime interest.
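The arithmetic behind those stability numbers is simple enough to sketch in Python (a quick check, nothing more; the function names are mine):

```python
def drift_hz(freq_hz, ppm):
    """Absolute drift in Hz for a given fractional stability in ppm."""
    return freq_hz * ppm * 1e-6

def required_ppm(freq_hz, max_drift_hz):
    """Fractional stability in ppm needed to stay within a drift budget."""
    return max_drift_hz / freq_hz * 1e6

print(drift_hz(137_500, 1))         # ~0.14 Hz from a 1 ppm oscillator
print(required_ppm(137_500, 0.01))  # ~0.07 ppm needed for the 0.01 Hz target
```

So the 0.01 Hz goal implies a reference more than an order of magnitude better than the original crystal, which is why a GPS-disciplined clock makes sense here.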
During those first three years I had tried various receiver, filter and preamp configurations. I now know what is needed with the SoftRock and my available antennas. I wanted to combine the filter, preamp and receiver into one box but I wanted to use a GPS referenced local oscillator for stability. In the end I settled on a design which puts all but the local oscillator into one box. The LO is a separate Leo Bodnar GPS Clock which supplies 408 kHz for the receiver (divided by four in the SoftRock quadrature LO generator) and 136 kHz for a 2200m phasing exciter.
The major building block for the receiver is a SoftRock Lite II kit from Five Dash. A few modifications need to be made for 2200m operation. The schematic shows the values for parts that need to change for operation on this band (C3, C4, C10, C11, C12, L1, T1, R5, R6, R16), as well as the removal of the crystal and external LO connections in its former place. The capacitors can be ceramic. I recommend mounting the SoftRock Lite II board with the insulating hardware that comes with it. Ideally one wants everything isolated from the metal box except for the shield of the audio cable connector. To maintain that one ground point I run the receiver either on a battery or an isolated wall wart.
For the front end stages I have married a filter design by YU1LM and a preamp design by W1VD. The filter provides a bandpass response to keep out of band signals from overwhelming the receiver, while the preamp provides about 20 dB gain which is needed with many small receiving antennas on LF. You want enough gain in the front end and receiver so that the noise floor comes up at least 10 dB when you connect the antenna. If this seems a little different from conventional advice, consider that we are dealing with extremely weak signals where even fractions of a dB can make a difference. If we want to keep the signal to noise ratio from being degraded a meaningful amount, we need that much gain to be sure the SoftRock and sound card noise floor don’t degrade S/N of the system. With the exception of the 10 uF electrolytic, all capacitors are ceramic types.
Next I needed a suitably stable local oscillator. We need a final LO frequency that is close enough to the 2200 meter band to allow tuning it with whatever sound card will be used. If the sound card sample rate is 96 kHz, we need to be within 48 kHz of the receiving frequency. I recommend staying a few kHz less than that due to the way anti-aliasing filters in sound cards work. This means we want our LO to be between about 96 kHz and 178 kHz in practice, preferably avoiding putting it “in band”. The LO frequency is divided by four in the SoftRock quadrature generator circuit. This means we need to inject a frequency four times higher into the receiver. Anything between 384 and 712 kHz will work. I was already using one of the two outputs from the GPS Clock to provide 136000 Hz LO to my phasing exciter. Available frequencies for the second output are somewhat limited and tied to the first frequency but in this case 408000 Hz is one of the options, and it is perfect. That puts our final LO at 102 kHz, comfortably within range, yet far enough removed from the band of interest to put the image frequency around 67 kHz, well down the slope of the receiver front end bandpass filter. Perfect!
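The divide-by-four arithmetic is easy to verify. This sketch (structure and names are mine, not part of the SoftRock design) checks a candidate GPS Clock frequency against the quadrature generator and the sound card tuning range:

```python
def softrock_lo_plan(rx_hz: float, clock_hz: float, sample_rate_hz: float = 96_000):
    """Illustrative check of a GPS Clock frequency for the modified SoftRock."""
    lo_hz = clock_hz / 4                  # quadrature LO generator divides by four
    offset_hz = rx_hz - lo_hz             # where the signal lands in the audio passband
    image_hz = lo_hz - offset_hz          # image frequency, mirrored about the LO
    in_range = abs(offset_hz) < sample_rate_hz / 2
    return lo_hz, offset_hz, image_hz, in_range

# 408 kHz clock, receiving near 137 kHz
lo, offset, image, ok = softrock_lo_plan(rx_hz=137_000, clock_hz=408_000)
# LO lands at 102 kHz, the signal at 35 kHz in the audio passband, the image near 67 kHz
```

With a 96 kHz sample rate the 35 kHz offset sits comfortably inside the usable passband, and the 67 kHz image is well down the front end filter skirt, as described above.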
First I tried injecting the 408 kHz square wave directly into the SoftRock. It worked but I didn’t have a good feeling about it. For one thing, that meant that the SoftRock and GPS clock grounds were connected, a situation which I was trying to avoid in case of ground loops and noise getting into the system. The GPS Clock also didn’t like the impedance, causing it to put out not only the harmonic rich square wave but also a significant amount of HF energy as ringing due to impedance mismatch. I tried using a transformer (for ground isolation) and low pass filter to clean up harmonics but this made the GPS Clock even less happy with a lot of ringing due to reflections. Since I had signal to spare I solved this, albeit somewhat crudely, by inserting a 10 dB attenuator between the GPS Clock and transformer. This gave a nice clean sine wave at sufficient level into the SoftRock LO circuit. I don’t claim this design to be elegant or perfect, but I do claim it works well for me. I used film capacitors in the filter because I had them on hand, but ceramic should be quite acceptable.
This new receiver has been in operation for several months. Sensitivity and gain are more than adequate for use with my LNV antenna. Frequency stability is now determined almost entirely by sound card sample rate drift and is on the order of 0.01 Hz over several hours. This is sufficient for all but EbNaut, where the sound card sample rate requires continuous monitoring and correction. I have not conquered that yet.
Occasionally I get asked how I go about building circuits on proto boards. This post describes how I built a low frequency band pass filter.
For some circuits, the layout will not be so easy, but for simple filters like this I have a method I almost always follow. Let’s start by looking at the schematic for the filter I will be building.
Now let’s think about that for a moment. We see one side of four capacitors (C2, C3, C5, C6) is connected to circuit ground. Also the shell of the input and output coaxial connectors are connected to circuit ground. If all of those points are connected to ground, then they are all connected to each other. To make this clearer, we can redraw the schematic as follows.
I like to lay out my parts on the board so they physically resemble the schematic representation. If you think of the top of the schematic as north, bottom south, left west, right east as if it were a map (it is a circuit map!), then we can think in terms of components oriented along a north-south line or east-west line. C1, for example, has one of its leads on the west and another east. C2 is a north-south oriented part. For a filter such as this, I start by laying out all the capacitors on the proto board in much the same way they are represented in the schematic, making sure to leave spaces to fit in the inductors later on. As I put each capacitor onto the board, I spread its leads slightly on the back side so the parts won’t fall off the board when I turn it over for soldering. They don’t have to be spread much.
I then flip the board over and just solder each capacitor lead to the pad around it. If using a temperature controlled iron, I suggest about 650F for soldering on these boards. Then I clip off excess lead length. Next, with the help of the schematic I identify any capacitors that are connected to their neighboring capacitor and make those solder connections on my board. I’ve developed a method of doing this using a solder blob but many builders will prefer to use a short piece of wire soldered across the pads as a jumper. Another method is to fully bend over one or more leads before soldering to the board, so that the component lead itself becomes the jumper. That method is easy, but if it becomes necessary to remove a part later, it can complicate matters. Finally at this stage I connect all the grounds together in a row, just as they are shown in the modified schematic. Again, using solder blobs I have developed a technique to build such circuit paths entirely of solder but a buss wire soldered along the board will probably be easier for most builders. With regard to my solder blobs and building solder rows, it is much easier if the iron is not too hot. 650F is on the warm side. 600 or even a little lower can make it easier to bridge blobs without them separating from neighboring blobs while doing it. This is second nature to me now, but it took a while to develop this skill. It involves getting a decent size blob of molten solder on the tip and then placing it into the gap where you want to form a bridge. Putting the iron in there and then trying to add the solder does not work!
The next step is installing the inductors. In this case, they are toroids. It may appear that the toroid itself is oriented along a north-south line while the schematic shows them east-west, but if you think of how the two leads come off the sides of the toroid, the leads are oriented east-west as are the connection points on the schematic. I hate having my toroids end up loose and wobbly on the board, and spreading leads doesn’t keep them tight against the board while soldering when small gauge wire is used. I have developed a method to help hold those little devils in place and keep them snug against the board while I solder them. It involves clip leads on the end of strings which loop up and over an overhead support with weights on the other end. This puts upward tension on the leads, pulling the toroid snug against the board.
A few words about soldering of the enameled wire may be in order. Life is too short for scraping or sanding the enamel off these very fine wires. I use wire with enamel that can be heat stripped. The heat of a hot soldering iron (I recommend 750F) along with fresh solder and perhaps a bit of liquid flux on the wire (if it is available) will burn the enamel off. The problem is it takes a few seconds and that much heat can cause the copper rings to come off the proto board! I hold the soldering tip against the wire about 1/16 inch above the board, being careful not to let it touch the board. It is necessary to apply a bit of solder to get the heat transfer working well enough but with a bit of practice the insulation can be burned off and the wire tinned without much difficulty. Dabbing a bit of flux (liquid or paste) on the wire before applying heat can be very helpful. If the wire is fluxed, usually just getting a small blob of solder on the hot tip and touching it to the wire for a couple of seconds will get the job done. I admit my first few attempts at this didn’t go so well but I got the hang of it after a bit of practice. Once the lead is properly tinned I can solder it to the pad on the board. At 750F this should be done quickly! Sometimes I lower the heat to 650F before soldering to the board. Once both leads are soldered to the board, I connect them to the adjacent components using my solder blob technique while the overhead support system is still keeping tension on the wires. Once all the soldering around these leads is finished, the alligator clips can be removed and the inductor leads cut short.
In this simple filter build that’s it except for connecting it to the outside world in whatever manner is appropriate for the project at hand.
On the morning of January 15 I was nearing the end of a 72 hour test of the JT9 submodes (JT9-10, JT9-5, JT9-2, JT9-1) on 136.395 kHz. The transmitter had been running 87% duty cycle for two days and as far as I knew all had been well. On this morning I checked in on things when I got up just before sunrise. It was running as expected with the waveforms on the ScopeMatch looking normal. I went about some morning chores and came back about 20 minutes later to check again. The transmitter was still running but the antenna was far off resonance. Minor changes are common but this was more than a minor change. I knew something was very wrong.
I quickly shut down the transmitter, grabbed my binoculars and went to the window to inspect the antenna. All wires were up and intact. I then hastily bundled up and went outside to check the loading coil / variometer. It didn’t take long to realize where the trouble was. When I removed the cover from the assembly housing, acrid smoke came billowing out and I could feel heat radiating from somewhere inside. This was not good! Since the smoke was so thick and presumably toxic, I could not do a full inspection until things had aired out a while.
Upon subsequent inspection I found the bottom of the moving inner coil badly damaged. I can only guess as to what happened. Careful inspection of the following pictures will reveal something of the construction. There was a wire (12 AWG solid, insulated) running down the length of the form on the inside. This provides connection from the bottom of the inner coil to a terminal at the top of the coil form which is jumpered to the top of the large outer coil. At both ends, the method of feeding through the form was an 18-8 stainless machine screw with washers and nuts as needed. On the inside the ring lug on the wire was between the head of the machine screw and the coil form. Stainless hardware may not have been an optimal choice. It stays clean practically forever but it has poor electrical properties. I had assumed it would be fine with the expected 2 amps or so of low frequency RF current.
What I suspect happened is that over time, probably aided by thermal expansion and contraction cycles of the PVC form, the hardware became loose on that bottom connection. As it began to loosen slightly, resistance of the connections may have increased somewhat, leading to more heat being generated. This may in turn have led to some slight softening of the PVC, allowing pressure on the connections to relax even more. I believe eventually it became so loose there was arcing which produced extreme heat in a localized area, eventually leading to the damage.
In hindsight, there may have been two warning signs that something was not right. If these were signs of failure in progress, things had been going south for some time. About two or three weeks prior to this incident I had noticed that when I was transmitting I would sometimes see “fuzz” appearing on both sides of my signal when viewed on the waterfall of my SDR receiver. It usually lasted only for several seconds, then cleared up. I did wonder about arcing, but the ScopeMatch looked perfectly normal. I put it down to just another artifact of severe receiver overload. It’s not as though my signal ever looked clean in the local receiver! There was always plenty of junk, no doubt worsened by the use of back to back diodes across the receiver front end to prevent damage from my own transmissions. But this particular “fuzz” phenomenon was something I hadn’t recalled seeing previously.
The second possible warning sign came 24 hours prior to discovery of the failure. On that morning resonance suddenly “jumped” higher in frequency. It wasn’t a big change, but was something I hadn’t seen before in benign weather conditions. Re-resonating took care of it but about an hour later it “jumped” back to the original resonance condition and needed to be adjusted again. This unexplained behavior should have been a warning that something was not right.
Much of what I think I know about this failure is speculation based on inspection after the fact. My theory seems further supported by the fact that the other stainless machine screws passing through this form had all loosened considerably. I know they were tight when it was built, but I was able to remove them using just my fingers. I will never know for sure exactly what happened, but the new inner coil will be designed to avoid the suspected failure scenario. If it fails again, I will have to reexamine my theories!
What? Mowable cables? That doesn’t make any sense! Let me explain. Throughout my nearly four decades exploring radio, I have often had occasion to run a “temporary” cable to some antenna. Usually these end up lying on the ground where they quickly become a nuisance, having to be moved every time the grass needs to be cut. This often continues for some time. After all, in a ham radio sense the definition of temporary is “anything expected to be in service for less than the life expectancy of the operator”. About a year ago I had a sudden explosion of “temporary” cables. I needed to run coax and a variometer control cable to my new 2200 and 630 meter transmitting antenna, as well as coax to a receiving antenna for those bands in another location. These were put down just after the last lawn mowing of the season, but were at risk of damage from the snowblower as I kept a path cleared to the transmitting antenna during the winter. This summer they have been a constant source of irritation as I had to move them every time I mowed the grass.
Since I still can’t afford good coax and conduit to do this job in a permanent (meaning less irritating) fashion, something had to be done. One obvious solution is to dig a shallow trench and lay the cable in it, with or without burying it afterward. This tends to be a lot of work and it’s messy, disturbing the grass (uh, I mean the weeds) and leaving dirt strewn all over. I was looking for a cleaner and, hopefully, easier method. One morning about 2 AM it came to me. I sat bolt upright in bed, sending Boo (the cat, who had been asleep on my chest) fleeing for cover. Who said you had to dig a trench? I have soft, sandy soil. Surely one could press a trench into the ground without the mess. It just might be easier, too. The following series of pictures depict the process, which worked very well.
Step One: Mark a line. Drive in stakes at each end and at any locations along the run where a bend is required. Run string (or small wire) from end to end, then spray paint a line on the ground along it.
Details of the string (wire) and painted line at a bend point.
Step Two: Hammer a slot into the ground. I used an 8″ x 8″ dirt tamper and a 10″ length of 1.6″ OD steel pipe. Lay the pipe on the painted line and hammer it in until its top is flush with the surface of the soil. In my soil this takes two to three blows, and the flat plate of the tamper makes it easy to know when you’ve reached the correct depth. This photo shows the pipe in place before being driven into the soil.
Here is a photo showing results after the pipe has been driven flush with the soil. To continue I simply pull out the pipe and move it forward 9 inches (just a bit less than the length of the pipe), then drive it into the soil again. The process moves along quite quickly.
Step Three: Lay the cable into the trench. I make 15 to 20 feet of trench at a time, then lay cable into it, then do another section of trench.
The completed job. There is no messy strewing of dirt, the paint line has virtually vanished, and the cable can barely be seen if one is not standing very close to it or directly in line with it. The top of the cable is 3/4″ to 1″ below grade, so it is out of danger from the mower. Of course it is still subject to damage from any number of things, but with temporary cable runs that is usually a fact of life.
Last year I acquired some transverters with the idea of getting back on the VHF and UHF bands. I only have one station transceiver so everything has to work from that. The transceiver’s ANT 1 connection normally goes to the input of my 160-10m amplifier, ANT 2 to the input of my 6m amplifier, and RX ANT IN to a low band receive antenna switching and control unit. For use with a transverter, I need ANT 1 to go to a transverter drive attenuator, the output of which goes to the transverter IF input (transmit), and RX ANT IN to go to the transverter IF output (receive). This requires me to remember to change two switches, and forgetting one during a quick band change can be disastrous. I proved that last year when I forgot a switch and accidentally dumped 1500 watts of RF into the makeshift drive attenuator I was using at the time. Poof! Szzzt! There went the magic smoke, costing me $40 for another hybrid attenuator. The situation gets even more complicated when more than one transverter is involved and the correct one must be selected. Since I have several amplifiers sharing a common high voltage supply it is also important that the correct one (and only the correct one) be enabled for transmitting while all the others be locked into standby. This was a nightmare!
Clearly I needed a better system. What I needed was automation of the process. A band decoder connected to the transceiver band data socket would do no good since that would only support bands that are native to the radio – 160 through 6 meters. Since I always have CAT software running (DXLab Commander) while operating there was another option. I could add a parallel port to my PC and configure it so that Commander would make one of the data pins go high for HF, another for 6 meters, another for 2 meters and so on. I could then build a control unit and add relays to do all the band switching tasks.
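The pin-per-band idea amounts to a one-hot mapping from band to parallel port data byte. The sketch below is purely illustrative; the band names and bit assignments are hypothetical and my actual Commander configuration may assign pins differently:

```python
# Hypothetical assignments: which parallel port data bit goes high for each band.
BAND_TO_DATA_BIT = {
    "HF":   0,   # 160-10m amplifier path
    "6m":   1,
    "2m":   2,
    "222":  3,
    "432":  4,
    "1296": 5,
}

def data_byte(band: str) -> int:
    """One-hot data byte to present on the parallel port for the selected band."""
    return 1 << BAND_TO_DATA_BIT[band]

# Selecting 2 meters raises only data bit 2; exactly one bit is ever high,
# so the control unit can never enable two paths at once.
```

Keeping the byte one-hot is the point of the whole scheme: the hardware downstream only has to detect a single high line per band.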
The first thing I did was sketch a basic concept diagram so I could better visualize what I needed. I was going to need two regular SPDT coaxial relays; one to route the transceiver’s ANT 1 connection to either the input of the 160-10m amplifier (for HF) or to the transverter drive attenuator (for VHF/UHF), the other to route the transceiver’s RX ANT IN to the low band antenna switch box (for HF) or to one or more transverters (for VHF/UHF). To select the proper transverter I was either going to need a lot of relays in a complex matrix or I was going to need two ready-made single input, multiple output matrix relays. I found two of the latter on eBay. Specifications were not available and I have no idea what they were made for, so I took some measurements. At 28 MHz, worst case port to port isolation is 90 dB. That’s good enough! Although I don’t fully trust the accuracy of my return loss measurement, it is at least in the ballpark. The relays measured 29 dB (1.07 VSWR), again plenty good enough. They obviously aren’t designed to handle much power but they don’t need to in this application. There will only be 10 milliwatts (+10 dBm) on the transmit relay.
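For anyone wanting to relate the two figures above, return loss and VSWR are tied together through the reflection coefficient. A quick sketch of the standard conversion:

```python
def return_loss_to_vswr(rl_db: float) -> float:
    """Convert return loss in dB to VSWR via the reflection coefficient magnitude."""
    rho = 10 ** (-rl_db / 20)
    return (1 + rho) / (1 - rho)

vswr = return_loss_to_vswr(29)   # about 1.07, as quoted for the matrix relays
```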
One of the two transverter IF switching relays
Relay isolation test
Relay return loss test
The next step was to start thinking about control circuit configuration. For HF and 6 meters, the only action to be performed would be to enable one of the amplifiers. Except for the enable relay which would be added to each amplifier, all other system relays would be de-energized for these bands, thus needing no switching. Out came the pen and paper for a little more design concept drawing.

Concept for switching circuit, HF or 6m amp enable

It would be elegant to use opto-isolators to interface the parallel port data lines to the relays to be switched, but that would involve buying a lot of parts. I wanted to use what I had, and I had drawers full of small transistors that could be used as switches in this application. I selected the venerable PN2222 transistor for this task. A look at the data sheet was promising but I wanted to verify that its actual DC current gain (hFE) was adequate for a good hard switching action in this application. The first thing I needed to know was how much current I could safely draw from the data lines on my PC’s newly added parallel port, a Rosewill RC-302E PCI-e adaptor.

Testing PN2222 DC current gain ‘in circuit’

I measured open circuit voltage at 3.30 volts. With a 1k ohm resistor to ground that dropped to 3.18 volts at 3.2 milliamps of current. The minimal voltage drop indicated this should be safe enough and would not damage the RC-302E. Allowing for a 0.6 volt drop across the PN2222 base-emitter junction, this would leave me with about 2.6 mA base current (3.18 minus 0.6 equals 2.58 volts across the 1k resistor). Cobbling together a quick and dirty test circuit I found that at 250 mA through the collector-emitter circuit, voltage drop across the PN2222 was less than 0.6 volt. In reality I only need to draw about 40 mA with the relays I plan to use, so this was more than good enough.
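The saturation check can be summarized numerically. This sketch simply repeats the bench arithmetic, using the measured values:

```python
# Bench measurements: 3.30 V open circuit port voltage, 3.18 V loaded into a
# 1k resistor, and a nominal 0.6 V base-emitter drop for the PN2222.
loaded_v, r_base, vbe = 3.18, 1_000, 0.6

i_base_a = (loaded_v - vbe) / r_base          # ~2.58 mA of available base drive
relay_ma = 40                                 # planned relay coil current
hfe_needed = relay_ma / (i_base_a * 1000)     # current gain needed to saturate

# hfe_needed is only about 16; even a worst-case PN2222 has ample gain,
# so the transistor switches hard into saturation with a 40 mA load.
```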
Edit: After writing this I devised a safe method to run the FT-2000 at 10 watts when on VHF/UHF. The entire band switching system is software-centric, controlled by DXLab Commander. Since the Yaesu CAT command set includes a method for setting power, I programmed each VHF/UHF band button to set the transceiver to 10 watts output. This is safe since there is no way to “bypass” software control in band switching where VHF/UHF is involved. The only possible glitch is in forgetting to reset power when going to HF, but this simply results in low power operation with no risk of equipment damage. In order to facilitate easy power resetting when going to HF I created an additional “HF” band button in Commander which disables the VHF/UHF system and resets power to 100 watts.
I needed an attenuator for driving VHF/UHF transverters. The goal was to take 100 watts of drive at 26 to 30 MHz down to +10 dBm (10 milliwatts) using whatever junk I could find. My 2 meter transverter uses a 26 MHz IF for 144 MHz, while my other transverters (222, 432, 1296 MHz) use a 28 MHz IF.
First, a few words about why. My Yaesu FT-2000 transceiver does have a low level transverter output. The level is -10 dBm, 20 dB below what I need. It could easily be amplified to reach the correct level, so why would I choose not to use it? The answer is both simple and complicated. I have just the one transceiver which I use from 1.8 to 144 MHz and hope to use for higher bands soon. Band switching all the stuff that needs to change going from HF to VHF or UHF with a transverter gets complex enough that I tend to forget things. I wanted to automate all the band switching tasks (RF routing to the correct path, be it an HF amplifier or VHF transverter, enabling the correct amplifier while disabling all others, etc.). I can easily do this using DXLab, which is my preferred multi function DXing software suite. DXLab understands transverters, so I can set it up to recognize what band I am on, be it 144, 222, 432 or even 1296 MHz, though the transceiver would be on 28 MHz for all of these. This would greatly simplify logging since the correct frequency would always be auto-filled in the logging software. The one stipulation in order to do all this is that band switching must be done through DXLab Commander in order for it to understand what band I am currently on when using transverters. If I set the band from the radio, Commander has no way of knowing that 28 MHz doesn’t mean I am operating on 10 meters!
Here’s the catch. On the FT-2000, the only way to activate the low level transverter output port is to switch to a special band called ‘AU’. This band is 28 MHz, but behavior is different from 10 meters in that on AU band the PA is disabled and the transverter output enabled. There is no way to do that when the radio is set to the normal 10 meter band or when sending a band/frequency request via CAT command. There is no CAT command for this AU band! It must be selected from the front of the radio, and not by a particularly intuitive process like all the other bands. If I used the transverter output, all my automation for band switching ideas would be out the window. Furthermore there would be confusion as to what band I was operating and I would have to manually edit frequency for each logged QSO. Forget it. That’s not going to happen! Hence my desire to use the high level output on the transceiver. I didn’t want to have to remember to turn down the drive, say to 5 or 10 watts each time I went to VHF or UHF, because I would tend to forget that eventually and the results might be costly. So, I wanted a transverter drive attenuator that would take 100 watts down to 10 milliwatts. That is 40 dB of attenuation.
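The 40 dB figure follows directly from the power ratio, as a quick check confirms:

```python
import math

p_in_w = 100.0     # full transceiver output
p_out_w = 0.010    # +10 dBm transverter drive level

atten_db = 10 * math.log10(p_in_w / p_out_w)   # 40 dB required
```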
Before deciding on the attenuator approach, I considered applying a fixed negative voltage to the FT-2000 ALC input to reduce its output to a very low level. I asked about this in two forums frequented by VHFers and was warned that there can be pitfalls. Some radios put out an initial spike of full power even with fixed voltage on the ALC line, which would not be good. Even if that were not the case for my FT-2000, failure of the ALC bias circuit would surely result in ugly consequences. I decided to forget about it and go with the high power attenuator. As always, I am grateful for the advice and elmering I received!
Schematic diagram and parts list for the attenuator
I had some 250 watt, 50 ohm RF load resistors on hand. One of those would make a fine input resistor for a pi network attenuator. I had some 51 ohm, one watt metal film resistors. One of those would do fine for the output. But for 40 dB attenuation, the series resistor in the pi network would have to be 2500 ohms at around 2 watts. I didn’t have something like that and trying to make one out of a series-parallel combination of resistors might add considerable stray capacitance. Ordinarily that might not matter too much at 28 MHz, but when making a 40 dB attenuator, stray capacitance could tend to “bypass” the resistor and cause the attenuation to be too low. However, there is another trick that can be used. The series resistive element can be replaced by a capacitor having reactance equal to the required resistor value at the frequency of interest. That works out to about 2.3 pF in this case. That is not much, but I had some Johanson 5200 0.8 to 10 pF multi-turn air trimmers around. If I could keep circuit strays low enough or shield input from output that should work. Using a variable element would allow me to “dial in” the proper amount of attenuation, compensating for circuit strays (as long as they weren’t too great). There is a caveat when using a capacitor for the series element in a pi network attenuator. Attenuation will not be constant over a wide frequency range, because the reactance of the capacitor is frequency dependent. That wasn’t a problem for my intended use, since only a narrow frequency range is involved.
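The resistor values quoted above come from the standard symmetric pi attenuator equations, and the capacitor value from equating its reactance to the required series resistance. A sketch of that arithmetic:

```python
import math

def pi_attenuator(z0: float, atten_db: float):
    """Resistor values for a symmetric pi attenuator (standard design equations)."""
    k = 10 ** (atten_db / 20)                 # voltage ratio
    r_shunt = z0 * (k + 1) / (k - 1)          # input and output shunt resistors
    r_series = (z0 / 2) * (k - 1 / k)         # series resistor
    return r_shunt, r_series

def equivalent_c_pf(r_series: float, freq_hz: float) -> float:
    """Capacitance whose reactance equals r_series at freq_hz."""
    return 1e12 / (2 * math.pi * freq_hz * r_series)

r_shunt, r_series = pi_attenuator(50, 40)   # ~51 ohm shunts, ~2500 ohm series
c_pf = equivalent_c_pf(r_series, 28e6)      # ~2.3 pF at 28 MHz
```

The ~51 ohm shunt values are why the 50 ohm load resistor and the 51 ohm metal film resistor fit so neatly, and the ~2.3 pF series element lands inside the range of the 0.8 to 10 pF trimmer.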
I needed a heat sink that could handle 100 watts intermittent duty. I immediately remembered I had some old repeater parts that might do the trick. Some folks might shoot me for this, but I grabbed a NOS Motorola MICOR UHF base station antenna network. This is a circulator, relay, filter and some other bits on a nice heat sink! I stripped all the rubbish off and there was my heat sink, ready to go. It’s a bit of an irregular shaped thing and has some extraneous holes here and there, but who cares? I was going to hide it behind a rack of equipment anyway. The antenna network also provided a type N female bulkhead connector with a short length of RG-400 coax already connected to it, as well as a BNC female bulkhead connector with a similar RG-400 lead. Wahoo! There were my input and output connections for the attenuator. I clipped them off before tossing the rest of the antenna network in my electronic refuse bin. RG-400 is nice stuff: Teflon dielectric, double silver plated braid, stranded silver plated center conductor. You can’t melt this stuff with soldering heat! All the better. A little more digging turned up a small cast aluminum box which I could use to house the attenuator components.
Inside view of attenuator with cover removed
I exercised some care in circuit layout and lead dress. I also left the shield on input and output coax as close to the end as possible in the hope that this might eliminate any need for a shield between input and output. After putting the circuit together I checked it on a spectrum analyzer / tracking generator. To my delight I found that using the trimmer I could vary the attenuation from 27 to 51 dB at 28 MHz. Wow! My circuit layout and construction were good enough. Flatness of attenuation over the 26 to 30 MHz range was within 1.5 dB. That is fine. In practice it will only be used over a 200-300 kHz range with any given transverter, and each transverter has its own built in adjustable input attenuator to fine tune its drive level. Attenuation slope over a 300 kHz range is too little for me to measure but probably about 0.1 dB. Return loss (input SWR) is better than my ability to measure, which is limited to about 30 dB RL (1.07 SWR). Plenty good enough.
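Those flatness numbers line up with the 1/f reactance of the series capacitor: to a first approximation the attenuation changes by 20·log10(f2/f1) dB between two frequencies. A quick sketch of the expected slope (the 28.0 to 28.3 MHz segment is my choice of example range):

```python
import math

def slope_db(f1_hz: float, f2_hz: float) -> float:
    """Approximate attenuation change for a series-capacitor pi attenuator,
    assuming attenuation scales with the capacitor's 1/f reactance."""
    return 20 * math.log10(f2_hz / f1_hz)

band_slope = slope_db(26e6, 30e6)         # ~1.2 dB across 26-30 MHz
narrow_slope = slope_db(28.0e6, 28.3e6)   # ~0.09 dB over a 300 kHz segment
```

Both figures agree well with the measured 1.5 dB flatness over the full range and the estimated 0.1 dB slope over 300 kHz.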
One final note. I stripped the paint off the surface of the box that mates with the heat sink and from around the hole where the BNC connector is. Was this necessary? I don’t know but my standard operating procedure for RF circuits is to remove paint between mating surfaces in the enclosure or where connectors attach. I find it easier to do this in the first place than to disassemble something and strip paint after finding there was a problem!