Monday, September 15, 2014

Snippets in Kate 5

Recently I spent some time porting and cleaning up the Snippets plugin and the underlying template interface for Kate 5. It's now fully working again and more powerful than ever. The template code was originally written by Joseph Wenniger, and most of what I show here still works as he originally implemented it. Still, there were some improvements I would like to show; also, I'm sure many readers might not be aware of this great feature at all.

Classic snippet use case: insert a for loop without having to type the iterator variable three times.
The template interface, which is part of the long-term stable KTextEditor API, was heavily cleaned up and now consists of just a single function
    bool insertTemplate(const KTextEditor::Cursor& insertPosition,
                        const QString& templateString,
                        const QString& script = QString());
which inserts a template into a view at the given position. It's very easy to use and still powerful -- if you write an application which uses KTextEditor, it might be worth spending a moment thinking about how you could make use of it.
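For illustration, inserting the classic for-loop snippet from the picture above could look roughly like this (a hypothetical sketch, assuming a KTextEditor::View* named view; the field syntax is explained in the next section):

    // Insert a for loop where the iterator variable only has to be typed once:
    // all three ${i} fields mirror each other, ${cursor} is where the cursor ends up.
    const QString snippet = QStringLiteral(
        "for (int ${i} = 0; ${i} < ${end}; ++${i}) {\n"
        "    ${cursor}\n"
        "}");
    view->insertTemplate(view->cursorPosition(), snippet);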
I also heavily refactored the implementation of the interface. More than 1000 lines of code were removed while actually enhancing functionality.

Core functionality changes

I changed the language of the snippets a bit to make it more clear and easy to use. In the following, I want to give a short overview of how it works now.

At the heart of the templates (or snippets) are editable fields (shown in green). They are created in the template string by writing ${fieldname}. They can have a default value, which can be any JavaScript expression. Pressing Tab jumps between the fields of a template. Whenever such a field is changed, all so-called dependent fields are updated. Those can simply be mirror fields (created by having a second field with the same name), or they can do something which depends on the contents of the other fields in the template, such as performing replacements or concatenations. Again, arbitrary JavaScript expressions can be used for that.
An example snippet (not very useful in practice) which has three editable fields (find, replace and sample_text) with a default value for each. Changing the values will update the result in the red "dependent" field in real-time.
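In snippet text, that example could look roughly like the following (a sketch reconstructed from the description, not a copy of the actual snippet shown in the screenshot):

    find:        ${find="o"}
    replace:     ${replace="a"}
    sample text: ${sample_text="hello world"}
    result:      ${sample_text.replace(find, replace)}

The first three fields are editable and have JavaScript expressions as default values; the last one is a dependent field which is re-evaluated whenever one of the others changes.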
Noticeable improvements over the previous functionality (from KDE 4 times) are that you can have fields with arbitrarily complicated default values which are still editable, and that dependent fields can use all other fields as input (not just one, as in KDE 4). It is now also possible to have inline JavaScript perform simple operations in the template.

The Shortcuts feature for the snippets now actually works in Kate.

Snippets now also have proper undo; in KDE 4, only a single typed character could be undone at a time while editing a snippet. Now, undo grouping works as it always does.

User interface improvements

For easy testing of your snippets, the "Edit Snippet" dialog has a "Test snippet" button now, which lets you test your snippet on-the-fly.
The user interface was simplified by removing unneeded options, and an inline quick-help feature was added which introduces the user to the most important features of the snippet language. Just click the "More" button.
Inline documentation on how snippets work

An example: C++ Header guards

As an example for how this feature works, let's look at how to create a snippet to generate a C++ header guard. First, create a repository for your C++ snippets:
Open the Snippets toolview and click "Add Repository".
Then, enter a name and specify that you want this only for C++ files:
Create your new repository.
Then, add a snippet:
Add a snippet. Easy.

You can retrieve the document's file name from the editor, make it upper-case and replace dots by underscores automatically to get a nice header-guard-suitable format by using code like this:
Example code for how you can create C++ header guards fully automatically.
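Such a snippet could look roughly like this (a sketch; the exact scripting call for retrieving the file name is an assumption on my part -- I'm using document.fileName() from the Kate scripting API):

    #ifndef ${guard=document.fileName().toUpperCase().replace(/\./g, "_")}
    #define ${guard}

    ${cursor}

    #endif // ${guard}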
If you do not want the guard field to be editable, just create a function which does the upper(fileName...) stuff, and have three fields which call the function (like ${func()}) instead of the two mirror fields and one default-valued editable field. If you do that, the template handler will immediately exit and not present any editable fields.
The ${cursor} variable can be used to place the cursor after all fields were filled. When you type something there, the handler will exit.

Click Ok. Now, to use your snippet, either press the shortcut you defined (if any), click it in the snippets toolview, or use code completion:
Snippets appear in code completion.
Result after executing our new header guard script. A sensible default value was selected automatically. Pressing Escape or Alt+Enter will exit the template handler and place the cursor at the point marked with ${cursor} in the template.
That should hopefully equip you with most of the knowledge you need to write your own snippets. If you like, you can use the full kate scripting API to write snippet code -- it for example allows you to retrieve the text in the current selection and similar useful things.

Some more examples on what you can do

Here are a few snippets demonstrating the features of the engine, even if some are of debatable practical relevance. I'm sure you can come up with better use cases for some of those things, though.
Write a clean regular expression in a comment and have the snippet mirror it with added extra-backslashes and removed spaces in a QRegularExpression variable. Makes regular expressions even more write-only than they already are.
Get the file encoding from the editor and use it as the coding declaration in a Python file header.

Some base64 in the selection ...

... decoded by a snippet which takes the selection and inserts the base64-decoded result.

Next steps

My next step will be to make this plugin loadable in KDevelop as well -- which should be quite easy thanks to the awesome work done in Kate to make the plugin infrastructure more generic. If you have further ideas on how to improve the snippets, let me know :)

Saturday, August 30, 2014

1420.4 MHz Hydrogen line: There it is!

With a reasonably simple setup, I finally succeeded in detecting the 1420.4 MHz galactic hydrogen hyperfine structure line:
One of the first successful measurements. The little bulge in the center of the image is the hydrogen line. The sharp, high peaks are man-made interference and not part of the astronomical signal. The yellow line is with the telescope pointed towards the milky way; the pink line is for a different location far off (the bulge is gone here, as it should be). The different base levels are probably mostly caused by the different elevations, which lead to different amounts of radiation from the ground reaching the antenna.

Setup

The setup consists of a 1.2m dish with a feed and two homemade 1.4 GHz low-noise preamplifiers. The amplifiers have about 19dB of gain each at 1.42 GHz and a noise figure too small for my equipment to measure (certainly below 2dB, I would claim). The simulated noise figure is about 0.4dB, but that is without taking the Q factor of the matching components into account. A spectrum analyzer (Rigol DSA 815) is used as the detector for now.
Preamplifier made by hand: traces are cut into a copper-coated epoxy board and components are soldered onto it (here, a linear voltage regulator as an example). For small boards with a low component count I found this technique to be significantly simpler than toner-transfer etching -- and no less effective.
Dish with mounted feed (for the latter, see text and pictures below)
Since you obviously cannot simply attach a coaxial cable to a parabolic dish reflector, a so-called feed is required to absorb the radiation collected by the dish reflector. I adapted a feed design commonly used for wireless LAN for the HI frequency (which is done by just multiplying all lengths by the ratio of the two frequencies). This design consists of a biquad antenna over a ground plane with two reflectors on the side (those are to make the radiation pattern more symmetric and fit the dish better). It provides an excellent match at the design frequency and seems to work well.
Biquad feed. A piece of wire forming two squares is placed above a ground plane. The coaxial cable is attached at the back side and is connected to the wire. The feed can be moved back and forth to focus. Theoretically. If I had a criterion for when it is in focus. (Seriously though, I moved it towards the dish a bit and saw the spillover decrease (less unwanted radiation from the ground reaching the feed, i.e. the base signal level drops), so I picked a point roughly where I saw no further significant decrease in spillover.)

Biquad feed and the two amplifiers seen from the back. The preamplifier is connected directly to the feed with an SMA connector. Note the paper towel roll wrapped in tape which is hot-glued to the feed for mechanical mounting (hey, it works!)

S11 of the biquad feed shown above. After a few adjustments, it provides an excellent match of more than 18dB return loss at the design frequency. That number means that less than two percent (10^(-18.21/10)) of the signal is lost due to feed mismatch.
For the more serious measurements I conducted, the detector (the digital spectrum analyzer, the gray box below -- the thing which makes the black screenshots with the yellow curves in them) is controlled by a computer. It records four sweeps of 30 seconds each over about 1 MHz of bandwidth and averages them; the computer fetches the result and stores it on disk. The camping mat is metallic and serves as interference protection (yes, it actually works ... a bit).
Metallic camping mats designed to insulate the user from low temperatures can also be used as Faraday cages -- with moderate success.
The complete setup. With chairs to prevent people from tripping over the wires. Left: Parabolic dish antenna with feed; Center: Laboratory power supply to power the amplifiers; right: Notebook and spectrum analyzer for recording the data.

Results

With this recording technique, I was able to take some nice data sets. In all of them, the antenna points in a fixed direction and the sky drifts across the picture. The x (horizontal) axis is frequency; the y axis is time in the upper graph and intensity in the lower one. In the upper graphs, intensity is encoded as color. Thus, if you look at the upper graph from top to bottom, you actually see different regions of the sky moving across the antenna. In the lower graph, you see the intensity of radiation of all those regions added together (the lower graph is the upper one integrated along the top-to-bottom axis, so to speak). All intensity scales are logarithmic.
Started quite late in the night, at about 80° elevation (almost straight up); different parts of the milky way (roughly Cyg, Cas, Perseus) move through the picture.
Unfortunately, my aiming is quite inaccurate and the antenna beam size is 10 degrees, so I can only very roughly tell what is currently visible in the picture. Still, in the picture above, you can clearly see three wide peaks, which are HI emission, and three narrow peaks, which are interference. We can remove the interference to get a nicer picture:
Same picture as above after interference removal algorithm.
This works by manually flagging the locations of the interference peaks, and then fitting a curve of the form a*x^2+b*x+c+d*exp(-f*(x-e)^2) to those locations (second-order polynomial for the "real", wide spectrum and a gaussian-shaped peak for the interference).
Small section of the spectrum shown above with interference peak (center). The blue line is data, the green line is a fit of a*x^2+b*x+c+d*exp(-f*(x-e)^2) to the data.
The d*exp(-f*(x-e)^2) term is then simply subtracted from the data, which leaves only the baseline, without the interference peak. This method even partially "recovers" data hidden by the interference peak (how reliable that information is is a different question, but in this case it looks fine).
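In case you want to try something similar, a minimal sketch of that procedure with scipy could look like this (assuming x and y are numpy arrays holding the frequency axis and the measured spectrum around one flagged interference peak):

    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, b, c, d, e, f):
        # second-order polynomial baseline plus a gaussian-shaped interference peak
        return a*x**2 + b*x + c + d*np.exp(-f*(x - e)**2)

    def remove_interference(x, y, peak_position):
        # initial guess: flat baseline at the median, peak at the flagged position
        p0 = [0.0, 0.0, np.median(y), y.max() - np.median(y), peak_position, 1.0]
        (a, b, c, d, e, f), _ = curve_fit(model, x, y, p0=p0)
        # subtract only the gaussian term, keeping the "real" wide spectrum
        return y - d*np.exp(-f*(x - e)**2)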
After this treatment, you can clearly see three wide bulges in the curve.
You might have noticed that the x axis is not labeled with frequency, but velocity. This is because the actual emission of the kind of radiation observed here only happens at one specific frequency (1.4204 GHz -- where the 0 in the graph is). Still, we see it at different frequencies because of doppler shift. Thus, the observed frequency of the radiation translates directly to the velocity with which the matter which emits the radiation moves towards us -- or away from us.
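A minimal sketch of that conversion (the rest frequency is the well-known HI value; the sign convention -- whether positive means approaching or receding -- is a matter of choice):

    C_KM_S = 299792.458          # speed of light in km/s
    F_REST = 1420.405751e6       # HI rest frequency in Hz

    def velocity_km_s(f_observed_hz):
        # non-relativistic doppler formula; here, positive means shifted to higher frequencies
        return C_KM_S * (f_observed_hz - F_REST) / F_REST

    # e.g. an offset of +100 kHz corresponds to roughly +21 km/s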
I would carefully claim that the three observed peaks originate from three different spiral arms of the milky way which rotate with different velocities. I'm not sure if that is accurate though.

With the same method, I took several more similar pictures of different regions of the milky way, some shown below.
Transit of (roughly) Perseus through the beam.
Transit of (again, roughly) Sagittarius / Aquila through the beam. Note that -- different from the pictures above -- all the matter observed here moves towards us (positive velocity), not away from us.

Future plans

My long-term plan is to attempt creating a survey of a significant part of the sky -- basically a map which tells how much HI radiation at what doppler shift is observed at which point of the sky. For that, I need two things: A reliable way to determine the antenna position; and a consistent way to compare signal amplitudes.
For the former, I'm currently trying to build a tilt-compensated compass with elevation sensor (basically a three-axis magnetic field sensor and a three-axis acceleration sensor with software). It's working a bit, but not really.
For the latter, one tool used in professional radio astronomy is a noise diode. That is a small device which injects noise into the receiver system at the very start of the signal path (in my case, it's a small device inside the biquad feed). It is periodically switched on and off and adds a constant offset to the amplitude of the observed signal. The trick is that this offset is reliably constant over long periods of time. When we see it change in the recorded data (and we will), we can be fairly sure the change is caused by the receiver system (amplifier, detector) changing -- for example because of temperature drift. Thus, by dividing the data by the observed amplitude of this constant offset, signal amplitudes can be evaluated even if the receiver and detector system drifts.
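As a rough sketch of that idea (hypothetical array layout: one spectrum per row, with matching diode-on and diode-off measurements):

    import numpy as np

    def calibrate(spectra_on, spectra_off):
        # height of the noise-diode step, per time step (averaged over frequency)
        step = (spectra_on - spectra_off).mean(axis=1)
        # express the sky spectra in units of the diode step, so receiver gain drift cancels out
        return spectra_off / step[:, np.newaxis]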
I built a zener-diode based noise generator which seems to work fine for my purpose. It can be switched on and off using a transistor and the Raspberry Pi GPIOs, and is powered by battery to get as little fluctuation as possible in the noise signal itself. The generator is attached to a small piece of wire as an antenna and is glued inside the feed.
Noise diode test. Pink curve: noise diode powered on; yellow curve: noise diode off. This is without the feed and amplifiers, but it looks similar with them.
Noise diode test with actual sky data. Unfortunately, no real astronomical signal is visible this time :( The upper graph is as explained above (x axis frequency, y axis time), the lower graph has time on the x axis and integrated intensity on the y axis. The spikes in the lower graph and the stripe pattern in the upper graph are caused by the switching of the noise diode. Performing a calibration would bring all the spikes to an equal height, and then remove them.

Caveats of using a spectrum analyzer as a detector for radio astronomy

It took me a while to figure out how to best configure the spectrum analyzer for this purpose. One thing which is easy to overlook (because it is nearly irrelevant in most applications) is that a spectrum analyzer does sweeping measurements, i.e. it measures intensity in one small frequency chunk at a time, then goes to the next, and so on. This means the resolution bandwidth (RBW), which basically controls the spectral resolution of the analyzer, also controls how much signal power is detected at once during the sweep. If it is set to 100 Hz, the analyzer will walk through the whole frequency span in 100 Hz chunks, detecting only 100 Hz of the spectrum's power at once. If set to 10 kHz, it will detect a hundred times as much power at once -- which gives an effective signal-to-noise ratio which is a hundred times better in the end! This is very unintuitive, because the noise level displayed by the analyzer actually increases for higher RBW values (which makes sense of course if you think about it: if you accumulate a larger part of the spectrum into a single channel, that channel will have more noise power overall). Thus, when using a spectrum analyzer for this purpose, you have a trade-off between S/N and spectral resolution (you always have a trade-off between S/N and spectral resolution, but in this case it's far more severe than usual -- exponent 3/2 instead of the usual 1/2 for the channel count, if I'm not mistaken). I selected 10 kHz spectral resolution; more resolution (lower RBW) certainly makes no sense for this kind of signal. Probably 30 kHz would be fine as well -- but that makes interference detection and removal harder again, because the narrow interference peaks get quite smeared out.
This also means a spectrum analyzer is not a very good detector for this kind of telescope -- with 10 kHz RBW and 1 MHz bandwidth (about what I used above), 99% of the signal power is lost simply because it is not being detected at any given time.
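The numbers behind that statement, as a quick sanity check:

    span_hz = 1e6                  # total observed bandwidth
    rbw_hz = 10e3                  # resolution bandwidth
    seen = rbw_hz / span_hz        # 0.01 -> only 1% of the band is measured at any instant
    lost = 1 - seen                # 0.99 -> the 99% mentioned above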

Conclusion

It is very nice to finally see some results come out of this project. I'm looking forward to improving the receiver system (eventually I want to replace the analyzer by an A/D converter + mixer) and the calibration process.






Wednesday, July 23, 2014

Project: Detecting the 1420.4MHz hydrogen line -- status report

A while ago, I decided it would be fun to try to detect the 21cm hydrogen line. The 21cm line is a hyperfine structure line of hydrogen; the latter is abundant in interstellar matter in our galaxy (and in other galaxies too). That makes this transition's radiation an interesting object to study, especially because you can determine the velocity of the regions emitting the radiation quite precisely by looking at the doppler shift of the radiation. That makes it possible to construct, for example, rotation profiles of galaxies. You can do that even if your spatial resolution is low (which is important, since spatial resolution is limited by the size of your antenna: as a rule of thumb, the area you see as one big smeared "pixel" is about the inverse of the size of your antenna expressed in wavelengths, taken as radians -- so if your antenna is 1.05m big, which is 5 wavelengths at 21cm, that gives you a spatial resolution of about 1/5rad, which is about 11 degrees. That's about 22 sun diameters, which is really bad if you wanted to make an image of the sky).
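The rule-of-thumb calculation from the parenthesis above, spelled out:

    import math

    wavelength_m = 0.21
    antenna_size_m = 1.05
    beam_rad = wavelength_m / antenna_size_m     # 1/5 rad
    beam_deg = math.degrees(beam_rad)            # about 11.5 degrees
    sun_diameters = beam_deg / 0.5               # the Sun is about half a degree across -> roughly 22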

An antenna

First thing you need is an antenna. For this purpose, I built a 14-element Yagi antenna, as seen below.
14-element 1.4GHz Yagi antenna
Closeup of the amplifier next to the resonator of the antenna. The circuitry is hidden below the copper shielding.
It has quite a lot of tape on it -- I underestimated the size and the need for mechanical stability a bit. For the next attempt, I will definitely use a sturdier piece of wood as the base. Anyway, first measurements indicate that it should work well enough; not great, but good enough for a first test.
S11 of the Yagi antenna shown above. x is frequency, y is log reflected power (lower is better).

A Preamplifier

The second thing in the signal path is a preamplifier, which is usually placed directly at the antenna and has the purpose of amplifying the received signal before any degradation can happen (e.g. through cables, which weaken the signal and add noise). The noise figure of the preamplifier (which is often called LNA, for low-noise amplifier) is important; it tells how much additional noise will be present in the detected signal. The noise figure of all subsequent amplifiers is usually not very relevant: they receive a signal which the LNA has already amplified, so the noise they add is small compared to the (amplified) noise already present. Thus, it makes sense to carefully design the very first amplifier, while the rest of them can be as cheap and simple as possible.
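To put a number on that, here is Friis' cascade formula for two stages with made-up example values (only the 2dB noise figure appears later in this post; the gain and the second stage's noise figure are assumptions):

    import math

    def db_to_lin(db):
        return 10 ** (db / 10.0)

    F1, G1 = db_to_lin(2.0), db_to_lin(20.0)   # first stage: NF 2 dB, 20 dB gain (assumed gain)
    F2 = db_to_lin(5.0)                        # second stage: NF 5 dB (assumed, deliberately worse)
    F_total = F1 + (F2 - 1) / G1               # Friis' formula for two stages
    print(10 * math.log10(F_total))            # about 2.1 dB -- dominated by the first stage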
It is very difficult to design a good LNA, though; so for some first tests, I use a generic wideband amplifier IC as my preamplifier, which has a not-very-good noise figure of 2dB. I hope it will work as a proof of concept; I plan to add a proper LNA later.

More amplifiers

The next thing you want to do is to amplify the received signal so much that its quality is limited by thermal noise: as soon as you reach that point, further amplification buys you nothing; you need to improve the antenna or cool your LNA instead. How much amplification that is exactly will depend on the circumstances, but since my measurement device can detect power levels down to roughly -120 dBm/Hz without problems and thermal noise from the antenna will probably be somewhere around -180 dBm/Hz, I will need somewhere around +60dB of amplification at least -- better a bit more. That's about a factor of one million in power.

Wide-band gain blocks are not a solution for every problem :(

My first attempt to achieve this was to use several more wideband amplifier blocks -- since they're cheap (~3€) and incredibly easy to use (connect signal in, signal out, add 5V power, done). That wasn't a very good idea, though, for two reasons:
First, besides the noise figure, another not-so-obvious disadvantage of a wideband amplifier is that it is, well, wideband: it will amplify everything, including, for example, the very bright GSM (mobile phone) band around 937 MHz. That is bad, because such a strong signal can cause intermodulation products which affect the quality of the signal you actually want to detect, or even drive the amplifiers into saturation.
Second, it is very easy to turn broadband amplifiers into oscillators. You have to strictly separate them from each other and avoid any kind of feedback, or they will start spitting out large power levels at seemingly random frequencies.
Both problems seem easy to solve by building a bandpass filter -- but I found it quite hard to build a good bandpass filter for 1.42GHz. It is a frequency where lumped-element filters (those made out of capacitors and inductors) are not really viable any more, since you need capacitors with incredibly small values, while distributed-element (microstrip, see this blog post) filters are not that great either, because a half or quarter wavelength of 21cm is still pretty large.

A new experiment: Frequency Mixers

All those reasons taken together drove me away from this simplest possible solution, so I decided to try what all the cool people do: use a frequency mixer to bring the signal down to a more manageable frequency right after the preamplifier, then amplify and filter it at that low frequency. A frequency mixer is effectively a device which shifts the spectrum of a signal along the frequency axis: mixing a 1GHz sine with a 995MHz sine will result in signals at the difference (and the sum) of those frequencies, so a 5MHz sine (and a 1995MHz sine, but that can easily be filtered away -- if you don't want it). Signal components which are shifted below 0Hz appear mirrored at the corresponding positive frequency (so, if a signal would be shifted to -3MHz, it will appear at 3MHz instead -- with a phase shift, but that is barely relevant here).
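If you want to see that behaviour in numbers rather than hardware, a tiny numpy experiment reproduces it (the frequencies are the ones from the example above; sample rate and length are chosen so the tones fall on exact FFT bins):

    import numpy as np

    fs = 8.192e9                       # sample rate -> 1 MHz FFT resolution with 8192 samples
    n = 8192
    t = np.arange(n) / fs
    mixed = np.sin(2*np.pi*1.000e9*t) * np.sin(2*np.pi*0.995e9*t)
    spectrum = np.abs(np.fft.rfft(mixed))
    freqs = np.fft.rfftfreq(n, d=1/fs)
    print(freqs[spectrum > 0.5*spectrum.max()] / 1e6)   # [5., 1995.]: difference and sum frequencies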
Mixers are very versatile devices which are used basically everywhere where radio frequency signals are present -- for example in WLAN, in GSM, radio, or satellite television (in the latter, there's actually a mixer in the LNA thing which you put in front of your satellite dish which shifts the received ~10GHz signal to somewhere around 2GHz to reduce losses in the cable to your TV receiver).
Professional radio astronomy applications tend to use more than one mixer stage to lower the frequency gradually (that has various advantages mostly related to filtering), but I hope I can get away with just one stage for this project.

I thus built a test board based on the LT5560 mixer (~3€), which is designed to mix a 1420 MHz signal down to 20 MHz using a 1400 MHz local oscillator (LO -- that's the name for the signal you mix the signal you're interested in with). As the LO, I use a programmable frequency synthesizer I built a while ago, based on the ADF4350 (~8€ -- but wow, that thing is difficult to put on a working board; it took me three attempts with a new board design each time).
ADF4350-based frequency synthesizer (left) controlled by a raspberry pi (right). This construction is used as the Local Oscillator (LO) for the mixer. A 1399.5 MHz signal is produced by this circuit on the coaxial cable on the left.

Balanced and Unbalanced signals

Apart from input matching, which is not very difficult since it is described in detail in the part's data sheet, the one tricky thing about using a mixer of this type is that it requires balanced inputs, while the signal on your coax cable is usually unbalanced. The difference between the two is basically that an unbalanced signal has two voltage levels -- ground and the signal -- while a balanced one has three: plus the signal, minus the signal, and ground. A device which converts one into the other is called a balun (for balanced-unbalanced). Baluns always work both ways (balanced to unbalanced or unbalanced to balanced). A balun is usually made in one of two ways:
  • transformer type: the unbalanced signal is passed into a transformer and by clever choice of taps on the load side you can get a balanced signal
  • delay line type: transmission lines of certain fractions of the wavelength of the expected signals (e.g. 1/4 and 3/4) are connected to the balanced signal; because of their different lengths they tap different phases of the waveform, which allows a balanced signal to be extracted
The former only works for low-ish frequencies (I don't know exactly how high you can go before they stop being usable, but it will be somewhere in the few-hundred-MHz area, at least for handmade ones); the latter works for high frequencies as well but is very narrow-band (it only works around one specific frequency). In my test board, both are used: a delay-line balun converts the unbalanced input from the LNA into a balanced signal, and a transformer balun converts the balanced mixer output into an unbalanced signal for the next amplifier stage.
I will not go into detail on how to build those baluns here, but good documents describing how to do it include this, this (German), this and this (ready-made components are available and are not even very expensive, but I did not find any Europe-based distributor who actually sells them -- and I don't want to pay $20 in shipping fees and wait a week each time I need a part).
I checked the performance of the self-made baluns with a spectrum analyzer and a directional coupler, and after a few experiments they seem to work well -- although I am not entirely happy with the performance of the transformer-type one: a return loss of 13dB means about 1/20 of the signal power gets reflected back into the mixer, which is more than I would like. I think one reason for this is that the windings of the wire are a bit chaotic, which makes the coupling between the windings somewhat non-deterministic. Still -- all good enough for the prototype ;)
The S11 curve of the delay-line one is much sharper than the one shown here, and it also has a better return loss at the design frequency.
S11 (return loss) of the transformer-type balun; x axis is frequency, y axis is log power
S11 of the delay line type balun. The resonance frequency around 1.4 GHz is easy to spot. The ripple at lower frequencies is probably at least partly an artifact of the measurement method. x axis is frequency, y axis is log power

But most important, it actually works:
Balanced signal created from an unbalanced input with the transformer balun. The yellow and teal curves show the plus and minus components of the balanced signal; the violet one is their difference, which resembles the original unbalanced signal. The input level is 0dBm.
A balun can also have an impedance transformation ratio, which in this case plays an important role in impedance matching. The transformer balun I built is a 16:1 balun, which takes the roughly 800Ω balanced signal at the mixer output and turns it into a 50Ω unbalanced one -- very convenient for this mixer circuit, since 800Ω is reasonably close to the mixer's output impedance (as detailed in the data sheet) at that frequency.
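As a quick check of that ratio: for an ideal transformer, the impedance ratio is the square of the turns ratio, so

    turns_ratio = (800 / 50) ** 0.5    # = 4.0 -> a 4:1 winding ratio yields the 16:1 impedance ratio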

Results

LT5560 active mixer test board. Yes, it looks quite horrible because lots of repairs were needed after the initial fabrication ;) Next one will look nicer.

And after fixing several grave issues with the LT5560 test board, it actually works! The issues included components we forgot to solder onto the board, an accidental short circuit of the power line (caused by overlooking that both baluns pass DC through), too few ground vias across the board, missing DC paths for the input pins, an oscillating circuit involving the power supply (fixed by adding some 10Ω resistors in series with the RF chokes), and a few more. Look:
Output of the mixer circuit for 1.3995GHz LO frequency and 1.42 GHz input frequency. Upper panel is time domain (time -- voltage), lower panel is frequency domain (frequency -- power). A strong tone is present at the difference of the two frequencies (20.5 MHz, where the cursor is).
Mixer output for 1.3995GHz LO and 1.405 GHz input. If the input frequency is lowered to and then below the LO frequency, the frequency peak moves to the left until it hits zero frequency and then starts moving right again.
There's some fairly strong "noise" on the output which seems to be somewhere around 400 MHz. I'm not sure where that comes from (higher-order intermodulation product?) but it will be easy to remove through a low-pass filter.

Next steps

Now, most of the difficult components for my 1.42GHz receiver should hopefully be in place. I plan to build a 20 MHz amplifier and lowpass filter next; I hope to be able to present a few more concrete results here shortly!

Monday, March 17, 2014

kdevelop-python for Python 3: first stable version (1.6.0) released!

Yesterday, Python 3.4 was finally released, so I'm now happy to announce the first stable release of kdevelop-python which supports Python 3! See below for the tarballs.
As in the Python 2 series, PyQt continues to be one of the best supported frameworks.
kdev-python version 1.6.0-py3 (obsolete -- please use 1.6.1-py3 below, since 1.6.0 didn't build on some systems)
http://download.kde.org/stable/kdevelop/kdev-python/1.6.0/src/kdev-python-v1.6.0-py3.tar.xz.mirrorlist
SHA256: 974178fa00a34c5e2a4d9f6408c7fcbf92e7933182dd59216a11c1452238ceb7

kdev-python version 1.6.1-py3
http://download.kde.org/stable/kdevelop/kdev-python/1.6.1/src/kdev-python-v1.6.1-py3.tar.xz.mirrorlist
SHA256: 26b1fa25e8f24f1e0b801ece02b283a750e77543e6df1e571dd52b36778859a5


The kdev-python 1.6-py3 series is compatible with KDevelop 4.6 (kdevplatform 1.6) and is suitable for working with Python 3.x source code.
If you're only interested in using (as opposed to packaging or developing) kdev-python, you should consider installing kdev-python from your distribution's package manager instead of downloading the source code. 
The python 3 and python 2 versions cannot be installed at the same time currently!
There's not that much more to say than what was already said in the beta announcement, so I will just post some screenshots of what continues to work in the Python 3 version:
Code completion is as powerful as ever and tries very hard to only make suggestions which are useful in the current context.
Code tooltips are still there, too.
As always, please report any bugs you might find to the bug tracker. Happy hacking!

Tuesday, March 4, 2014

kdevelop-python for python 3: beta release

Good news: Python 3.4 is about to be released, and with it kdevelop-python's first version to support Python 3. Until that happens in a few days, here's a beta:
kdev-python version 1.5.80-py3
http://download.kde.org/unstable/kdevelop/kdev-python/1.5.80/src/kdev-python-1.5.80-py3.tar.xz.mirrorlist
SHA256: 99ca1ce97e2a7e553051be7505c17a921ab1aaf318999826ea285f771bcc538a

The kdev-python 1.6-py3 series is compatible with KDevelop 4.6 (kdevplatform 1.6) and is suitable for working with Python 3.x source code.
If you're only interested in using (as opposed to packaging or developing) kdev-python, you should consider installing kdev-python from your distribution's package manager instead of downloading the source code.
There are a few things which need to be announced about this, so read on!

Embedded Python fork

First and foremost, this abomination is finally gone in the -py3 series. Python 3.4 merged a patch which is required by kdev-python, so we can now use the system's Python installation. This will especially make kdev-python comply with all distributions' requirements for packaging software, which means it will hopefully soon be available in all major distributions' package repositories.
This also means that kdev-python now depends on Python >= 3.4.

Python 2 compatibility

The py3 series is not compatible with Python 2, and currently you cannot install kdev-python and kdev-python3 side by side. I will try to address this restriction in the future, and I will also publish a script to install the two versions into separate environments, but for now that's how it is. In particular, for packagers, this means that kdev-python must conflict with kdev-python3. The problem cannot be solved by just renaming all the files; it also requires new UI and glue code to be written to select the correct language version, which I have only partially done as of today.

Branch names in kdev-python.git

I also used this opportunity to reorganize the branches in kdev-python.git a bit:
  • python3 is now python3-legacy (do not use)
  • python3-nofork is now python3; this branch is the most recent (unstable) version of the python3 version of the plugin, and compiles against kdevplatform master
  • there's a 1.6-py3 branch which is like 1.6 but for python 3.
If you had any of the renamed branches checked out, you might need to do git reset --hard origin/branchname. In the near future, master will also be renamed to python2 and python3 will be renamed to master (but not yet).

Feature comparison of kdev-python3 and kdev-python2

Generally, the Python 3 version has all the features of the Python 2 version, plus a few more, and some bug fixes. Not all new Python 3 features are understood yet (the syntax is supported, but the semantics isn't -- e.g. nonlocal does nothing), though. What is there, however, is support for function annotations:
If the expressions in a function annotation represent a type (i.e. not an instance or something else), they are used to adjust the function's return and argument types.
I really hope this feature gets used for type hints, so this is a first step to encourage you to use it for that. ;)
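For example, with a (made-up) function like the following, kdev-python can use the annotations to figure out argument and return types:

    def scale(values: list, factor: float) -> list:
        # kdev-python reads 'list' and 'float' from the annotations above
        return [v * factor for v in values]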
Not all is awesome just yet, though: Although I'm not aware of any major issues, there will be regressions (things which were working in Python 2 but are now broken). There are always regressions, even with the ~90% test coverage kdev-python has. That's the purpose of this beta: go forth and test, and report all the bugs to the tracker so they can be slain!

Friday, February 7, 2014

kate: intelligent code completion for all languages!

... well, maybe that's a bit of an exaggeration, but it's certainly much more intelligent than before. Look:

Code completion in CSS
... bash
... Lua
... PHP
even Gnuplot!
Note how this one has a different set of possible items for the same query, respecting the context.
Even Mathematica ;) This image shows a problem which still needs to be fixed: in case-insensitive languages, all completion suggestions are lowercased (which is not technically wrong of course, but a bit ugly). It's easy to fix, but simply not done yet.
There's unexpected profit from this in quite a few areas, even in KDevelop: for example, we now get code completion for all keywords in doxygen comments:
Completion for doxygen keywords inside a doxygen comment
Of course, those only appear inside actual doxygen comments, and not in C++ code. When the cursor is in C++ code, it shows the C++ keywords instead (but they will not be very visible in KDevelop, since they're sorted below KDevelop's own suggestions, which are better).

How does this work?

Short answer: magic! Correct answer: it uses the highlighting files. For highlighting, Kate has a list of possible keywords for each language, stored in the highlighting files (/usr/share/apps/katepart/syntax/$language.xml). Those keywords are even context-sensitive: you will notice that e.g. the PHP highlighter does not highlight PHP function names inside comments or strings. So, the highlighting engine needs to know which keywords are valid at which position. Those are precisely the keywords which are suggested in the completion list.
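Schematically, such a keyword list and its use in a highlighting context look roughly like this in a syntax file (a shortened, sketch-style excerpt, not copied from a real file):

    <list name="keywords">
      <item> if </item>
      <item> else </item>
      <item> while </item>
    </list>
    ...
    <context name="Normal" attribute="Normal Text" lineEndContext="#stay">
      <keyword attribute="Keyword" context="#stay" String="keywords" />
    </context>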

What now?

Now that we have this feature, I think we can make more out of it in quite a few cases. In particular, I want to invite you to have a look at your favourite language and make sure all keywords / builtin functions / etc. are actually listed. Because of this feature, it might make sense to list keywords even for languages where they are not terribly helpful for highlighting; a prominent example would be HTML, where the highlighter is currently totally generic and does not actually look at e.g. the tag names (thus, there's no completion). If you fixed that by actually listing all valid HTML tag names, you'd (1) get better highlighting, e.g. you could mark undefined elements (think typos) as errors, and (2) get completion for free.

Another thing which can be improved is the context sensitivity. Some languages already do this rather well, but many will highlight keywords even in places where it would be easy to detect that the keyword does not make sense there. That doesn't matter much for highlighting alone, because users generally write code which makes sense, but still -- if you can detect it, both consumers of the highlighting data (the actual highlighting, and the completion engine) gain something from it. So, extra motivation for making things more exact! ;)

I'm sure we can do more cool stuff with this. If you can come up with a good idea -- tell me, I'm happy to talk about it.