I sometimes look at the intervals table of my workouts on my favourite run log, http://runningahead.com. It's missing a speed column though, which I sometimes prefer over pace. This little script adds it :)
I've only run it so far in
- Firefox (tested with Greasemonkey)
Read the entire post to view the source.
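For reference, the conversion is presumably just speed = 60 / pace (km/h from min/km). A quick shell sketch of the arithmetic (an illustration only, not the actual userscript, which is JavaScript):

```shell
# convert a pace in min/km to a speed in km/h; awk does the arithmetic
pace_to_speed() { awk -v p="$1" 'BEGIN { printf "%.1f\n", 60 / p }'; }

pace_to_speed 5    # 5:00 min/km -> prints 12.0
pace_to_speed 4    # 4:00 min/km -> prints 15.0
```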
I was looking for a web-based project that allows one to do CRUD operations in an easy way, without needing a complex GUI such as phpMyAdmin or Adminer. I found very few good solutions, so I set out to create my own project: PHP MySQL Explorer (PME for short?)
Rename the file index.php to something else (or just keep it that way), add a configuration file with the same name but the suffix .inc.php instead of .php, and add the following minimal data to the file:
```php
<?php
$host = 'localhost'; // where to connect to -- usually something like localhost
$db   = '';          // MySQL database name
$user = '';          // MySQL username
$pw   = '';          // MySQL password for the user
```
Now you can just browse, edit, update and delete rows on all tables. If the database is designed with foreign keys, it will take those into account and create links where necessary.
- CRUD operations on tables in a basic but functional GUI
- Written in PHP (dunno if that's a feature?) using PDO
- Automatically creates links using the foreign key constraints (hence the tables must use InnoDB, not MyISAM)
- Additional user system
- Permissions for users for any of the CRUD operations on any table using a simple rule system
- Configurable (and includes a configuration generator)
You can try it out on the demo page, with username demo and password demo.
On this server I host several git repositories, and sometimes I want to add or update a single file in a large repository. It would be quite convenient if I could just update the bare repository without doing a checkout (it saves space, avoids trouble with outdated checkouts, etc.)
Obviously, someone on the internet had already asked the same question. I just put it in a nice script, with a simple test to ensure it works :)
You can download the script called git-bare-add, and view the source of the script below.
Usage is quite simple:
git bare-add [-b $BRANCH] $BARE-REPO $REL-PATH $NEW-ABS-FILE $COMMIT-MSG
- $BARE-REPO is the path to the bare repository.
- $REL-PATH is the path, inside the repository, of the file that should be updated.
- $NEW-ABS-FILE is the absolute path of the file that should be copied to $REL-PATH inside the repository.
- $COMMIT-MSG is (obviously) the commit message.
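The underlying idea can be sketched with git plumbing commands: store the new content as a blob, load the branch's tree into a temporary index, swap the file in, and write a new commit on top. All names below are examples, and the actual git-bare-add script may differ in its details:

```shell
# Sketch: update a file in a bare repository without a checkout,
# using git plumbing commands. Names here are examples only.
set -e

BARE=$(mktemp -d)/repo.git
git init -q --bare "$BARE"

# seed the bare repo with an initial commit so there is a branch to update
WORK=$(mktemp -d)
git -C "$WORK" init -q
echo "hello" > "$WORK/greeting.txt"
git -C "$WORK" add greeting.txt
git -C "$WORK" -c user.name=me -c user.email=me@example.com commit -qm "initial"
git -C "$WORK" push -q "$BARE" HEAD:master

# now change greeting.txt directly in the bare repository
NEWFILE=$(mktemp)
echo "hello again" > "$NEWFILE"
export GIT_DIR="$BARE"
export GIT_INDEX_FILE="$BARE/tmp-index"   # temporary index; bare repos have none
BLOB=$(git hash-object -w "$NEWFILE")     # store the new content as a blob
git read-tree master                      # load the branch's tree into the index
git update-index --add --cacheinfo 100644 "$BLOB" greeting.txt
TREE=$(git write-tree)                    # write the updated tree
COMMIT=$(echo "update greeting" | \
  git -c user.name=me -c user.email=me@example.com commit-tree "$TREE" -p master)
git update-ref refs/heads/master "$COMMIT"

git show master:greeting.txt              # prints "hello again"
```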
One can also edit a file in place, by using the option --edit and dropping the $NEW-ABS-FILE argument:
git bare-add --edit [-b $BRANCH] $BARE-REPO $REL-PATH $COMMIT-MSG
To perform some tests:
git bare-add --test
It should be relatively safe to use this script, since it only goes through git's own commands rather than poking at the repository files directly.
Note that there is no protection against simultaneous access: I think commits made while an edit is in progress will simply be ignored. It is therefore best used on a repository where you are certainly the only writer for the whole duration of the edit.
Wilkinson-Rogers to code converter. Look at wilkinson2formula.html for more info!
Some web pages (e.g. reddit) aggregate a lot of useful youtube links, but provide no way of playing them all, without adding them manually in a playlist.
That is where my next little project comes in. You give it a URL, it looks for all the URLs that look like they point to a youtube video, and it generates a list and a video player that will automatically play them all.
Usage is very simple:
- $URL is a single URL, or a ";;"-separated list of URLs (yes, two semicolons)
- To just list songs in order: http://jerous.org/tools/yt-list.php?url=$URL
- To list the songs and shuffle: http://jerous.org/tools/yt-list.php?shuffle=1&url=$URL
Currently there is a limit of 120 videos, imposed by the youtube API.
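The extraction step itself is simple in principle. Here is a rough shell sketch of how youtube links could be picked out of a page (an illustration only; the actual tool does this server-side, and its exact matching rules are its own):

```shell
# pick youtube video IDs out of HTML; this regex is an illustration only
extract_ids() {
  grep -oE '(youtube\.com/watch\?v=|youtu\.be/)[A-Za-z0-9_-]{11}' \
    | sed -E 's#.*[=/]##' | sort -u
}

html='<a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">one</a>
<a href="https://youtu.be/dQw4w9WgXcQ">same video, short link</a>
<a href="http://example.com/">not a video</a>'

printf '%s\n' "$html" | extract_ids    # prints dQw4w9WgXcQ once
```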
Try it out on the following resources:
For simulations I need to call functions in MATLAB and retrieve results from within an OMNeT++ simulation. There is documentation; however, it took me a while before I got it all working. So I ended up with a small wrapper class that allows one to evaluate MATLAB code in a running MATLAB instance.
Because MATLAB has to start up the GUI every time the simulation starts (edit: I discovered the flags -nojvm -nodisplay -nodesktop, which speed this up), it gets quite slow and annoying, and thus I also looked at writing a wrapper for Octave, a language that is largely compatible with MATLAB.
Another advantage of using Octave instead of MATLAB is that Octave is a shared library that runs in the same process, so one can pass pointers around.
By flipping a switch, the code will be either MATLAB- or Octave-based.
The source and very simple, small examples can be found in source.tgz (5 KiB, modified on 27 January 2016, MD5 0b00489725ee2afbd929429f004c2355)
Click the read more link for some more info.
After attempting to extract annotations from my Sony PRS-350 a long time ago, and finding only a very strange format, I decided not to pursue it further (especially as the Internet didn't have a clue about it either).
Now that I have acquired a new ereader, the Kobo Aura, I thought I'd visit this topic again, but with my new toy.
My main reason to do this is that I often highlight words in ebooks for lookup. It would be great to extract them so I can have them offline for later reference and practice.
For the Kobo Aura this proves to be a breeze. So I made a bash script that does all of this at once for every book that has annotations, and outputs an HTML table containing the definitions.
Download it from translate_kobo_annots.sh (6 KiB, modified on 30 December 2015, MD5 86534491a86304faa566ce573a5b6da5) and run translate_kobo_annots.sh html -d ./dict/ (with your Kobo connected) to create an HTML file.
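The reason it's a breeze: the Kobo keeps its annotations in an sqlite database, .kobo/KoboReader.sqlite, in a table named Bookmark. The sketch below shows the idea on a mock database with the same layout (the table and column names match what I found on my Aura; treat them as assumptions for other models and firmware versions):

```shell
# Kobo annotations live in .kobo/KoboReader.sqlite, table Bookmark
# (table/column names as found on my Aura; other firmwares may differ).
dump_highlights() {
  sqlite3 -separator '|' "$1" \
    "SELECT VolumeID, Text FROM Bookmark WHERE Text IS NOT NULL;"
}

# demo on a mock database with the same layout
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE Bookmark (VolumeID TEXT, Text TEXT);
               INSERT INTO Bookmark VALUES ('novel.epub', 'ereader');"
dump_highlights "$DB"    # prints novel.epub|ereader
```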
The read more link shows some more details, and contains a listing of the file.
Call me old-fashioned, but I like feeds: they're clean, they're fast, they're sort of local. The problem is, however, not all sites provide an atom (or RSS) feed, and if they provide one, it sometimes is more than I want.
E.g. someone recommended the language articles by Michele Berdy at The Moscow Times. It provides an RSS feed for all opinions, but not for a specific subset. It does, however, give an HTML listing of the most recent articles.
I went out for a quick look on the webz, and found a couple of paid services, and the free Feed Creator by FiveFilters.org. It seemed to do what I want, but as it wasn't open source and actually seemed quite easy, I did a very basic implementation myself.
It takes three parameters:
- title: an optional title
- url: what site to fetch. There is a limit of 1 MiB
- url_contains: performs the XPath query //a[contains(@href, '$url_contains')], i.e. returns all a-tags whose href contains url_contains
Currently, there are a couple of keys available that can be used. Values are first converted to lowercase, unless otherwise noted.
- url[not_]contains: select the URL if the href does [not] contain the value. Multiple occurrences are allowed.
- urltitle[not_]contains: select the URL if the link's title does [not] contain the value. Multiple occurrences are allowed.
- div_class: select only URLs that are inside a div with a class containing the case-sensitive value. Only the last occurrence will be used.
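Putting it together for the Moscow Times example above, a feed URL would look something like this (the values are placeholders to be filled in with the real listing URL and href fragment):

```
http://jerous.org/tools/site2atom.php?title=Berdy&url=$LISTING_URL&url_contains=$FRAGMENT
```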
In the future there'll be some additions, which are probably documented only on http://jerous.org/tools/site2atom.php.
Feel free to use in any way you see fit, and let me know if it suits you in any way :)