February 25, 2014

How to Remove the Visual Deals Popup Spam

I normally don't do this, but these deals are so annoying, so here it is: How to Remove the Visual Deals Popup Spam. It's hidden in your Pinterest button!

February 10, 2014

gdb tips

I usually debug Linux programs using gdb, and I like the neatness of its tui mode, thanks for that feature! Here are some simple commands to get you started using gdb. There are several thorough docs out there; this is simply meant as a quick start and memory aid for me, and I'll make no attempt to replace any of them.
  • Starting your program with arguments: gdb -tui --args love_calc john jane
  • Setting a breakpoint in a specific file (in your huge project): b filename.cpp:linenum, i.e. b posix_thread.cpp:25
  • Stepping is s (or step), stepping a single machine instruction is stepi, and continuing is c.
  • Examining variables is done with p variable_name, i.e. p argv[1]
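The commands above can also be collected in a command file and replayed with gdb's -x option. A sketch (the file name debug.gdb is my own; love_calc and posix_thread.cpp are the examples from above):

```
# debug.gdb -- load with: gdb -tui -x debug.gdb --args love_calc john jane
break posix_thread.cpp:25
run
print argv[1]
continue
```

This is handy when you keep hitting the same breakpoints across debugging sessions.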
The above image shows the result of the commands.

There's a small guide from GNU on tui single-key mode, and a quick guide if you need more than I have shown here.





Typesafe bit mask operations or Bits, bytes and operators.

In production? You're kidding, right? No way do some network structures contain C bit-fields. Turns out he wasn't kidding (sigh). Sometimes you just have to wonder how, or through whom, that stuff gets in there.

There's a neat trick in C++ for creating type-safe bitmasks using enums and a template class. You'll have to know your C++ operators and how to overload them if you're looking for more than just my simple template class. The source is here. You build it with: g++ bitfield.cpp -o bitfield -Werror

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <iostream>
#include <bitset>

template<class T,typename S>
class bitfield {
public:

    inline bitfield();
    inline bitfield(const T &bit);

    inline const size_t size() const;
    inline const S num_bits() const;
    inline const S get(const T &bit) const;
    inline const S &get_bits() const;

    inline void set(const T &bit);
    inline void clear(const T &bit);
    inline void toggle(const T &bit);

    inline S operator^=(const T &bit);
    inline S operator|=(const T &bit);
    inline S operator&(const T &bit);
    inline S operator=(const T &bit);

    inline const char *dump()const;

private:    
    S bits;
    static const S zero = 0;
    static const S bit_base = 8;
};

template<typename T,typename U>
inline void set_bit(U &bits,const T mask)
{
    bits |= mask;
}

template<typename T,typename U>
inline void toggle_bit(U &bits,const T mask) 
{
    bits ^= mask;
}

template<typename T,typename U>
inline uint8_t clear_bit(U &bits,const T mask) 
{
    return bits &= ~mask;
}

template<typename T,typename U>
inline const bool is_bit_set(const U &bits,const T mask)
{
    return bits & mask;
}

template<class T,typename S>
inline bitfield<T,S>::bitfield()
{
    bits = zero;
}

template<class T,typename S>
inline bitfield<T,S>::bitfield(const T &bit)
{
    bits = bit;
}

template<class T,typename S>
inline const S &bitfield<T,S>::get_bits() const
{
    return bits;
}

template<class T,typename S>
inline const size_t bitfield<T,S>::size() const
{
    return sizeof(*this);
}

template<class T,typename S>
inline const S bitfield<T,S>::num_bits() const
{
    return size()*bit_base;
}

template<class T,typename S>
inline void bitfield<T,S>::set(const T &bit) 
{
    ::set_bit(bits,bit);
}

template<class T,typename S>
inline void bitfield<T,S>::clear(const T &bit)
{
    ::clear_bit(bits,bit);
}

template<class T,typename S>
inline const S bitfield<T,S>::get(const T &bit) const
{
    return ::is_bit_set(bits,bit);
}

template<class T,typename S>
inline void bitfield<T,S>::toggle(const T &bit)
{
    ::toggle_bit(bits,bit);
}

template<class T,typename S>
inline const char *bitfield<T,S>::dump() const
{
    // out is static so the pointer returned by c_str() remains valid after
    // this function returns (a local string would leave a dangling pointer)
    static std::string out;
    out.clear();
    for(unsigned int i=num_bits();0!=i;i--)
    {
        out += ((1 << (i-1)) & bits) ? "1" : "0";
    }
    return out.c_str();
}

template<class T,typename S>
inline S bitfield<T,S>::operator^=(const T &bit)
{
    ::toggle_bit(bits,bit);
    return bits;
}

template<class T,typename S>
inline S bitfield<T,S>::operator|=(const T &bit)
{
    ::set_bit(bits,bit);
    return bits;
}

template<class T,typename S>
inline S bitfield<T,S>::operator&(const T &bit)
{
    return ::is_bit_set(bits,bit);
}

template<class T,typename S>
inline S bitfield<T,S>::operator=(const T &bit)
{
    return bits = bit;
}

enum Mask16 {
    ab1=0x0001,
    ab2=0x0002,
    ab3=0x0004,
    ab4=0x0008,
    ab5=0x0010,
    ab6=0x0020,
    ab7=0x0040,
    ab8=0x0080,
    ab9=0x0100,
    ab10=0x0200,
    ab11=0x0400,
    ab12=0x0800,
    ab13=0x1000,
    ab14=0x2000,
    ab15=0x4000,
    ab16=0x8000,
};

enum Mask8 {
    b1 = 0x01,
    b2 = 0x02,
    b3 = 0x04,
    b4 = 0x08,
    b5 = 0x10,
    b6 = 0x20,
    b7 = 0x40,
    b8 = 0x80,
};

int main (int argc, char **argv)
{
    bitfield<Mask8,uint8_t> bf8;
    std::cout << "-------------------------------" << std::endl;
    std::cout << "bf8 size: " << bf8.size() << std::endl;
    std::cout << "Bits constructor: " << std::bitset<8>(bf8.get_bits()) << std::endl;

//    bf8.set(b2);
    bf8 |= b2;
    std::cout << "Bit initialized: " << std::bitset<8>(bf8.get_bits()) << std::endl;

    uint8_t bit_flip = 0;
    for(int i=0;i<8;i++)
    {
        bit_flip = (1 << i);
        const char *p = (bit_flip<=0x08) ? "0x0" : "0x";
        std::cout << "-------------------------------" << std::endl;
        std::cout << "Simulated Mask: " << std::bitset<8>(bit_flip) << std::endl;
        std::cout << "Simulated Hex : " << p << std::hex << (int)bit_flip << std::endl;

        // bf8.toggle(static_cast<Mask8>(bit_flip));
        bf8 ^= static_cast<Mask8>(bit_flip);
        // (bf8.get(static_cast<Mask8>(bit_flip))) ? std::cout << "true" << std::endl : std::cout << "false" << std::endl;
        (bf8 & static_cast<Mask8>(bit_flip)) ? std::cout << "true" << std::endl : std::cout << "false" << std::endl;
        std::cout << "bf8.bits " << std::bitset<8>(bf8.get_bits()) << std::endl;
    }

    bitfield<Mask16,uint16_t> bf16;
    std::cout << "-------------------------------" << std::endl;
    std::cout << "bf16 size: " << bf16.size() << std::endl;
    std::cout << "Bits constructor: " << std::bitset<16>(bf16.get_bits()) << std::endl;

    bf16.set(ab9);
    std::cout << "Bit initialized: " << std::bitset<16>(bf16.get_bits()) << std::endl;

    bf16 = ab10;
    std::cout << "Bit initialized: " << std::bitset<16>(bf16.get_bits()) << std::endl;
    std::cout << "num bits: " << std::dec << bf16.num_bits() << std::endl;

    // testing for placement in a telegram!
    struct test_telegram {
        uint8_t version;
        uint8_t type;
        bitfield<Mask8,uint8_t> b;
    } tt = {0};

    std::cout << "-------------------------------" << std::endl;
    std::cout << "tt size: " << sizeof(tt) << std::endl;

    tt.b = b3;
    std::cout << "tt.b: " << std::bitset<8>(tt.b.get_bits()) << std::endl;
    std::cout << "tt.b.dump() : " << tt.b.dump() << std::endl;

    bitfield<Mask8,uint8_t> bf_constructor(b5);
    std::cout << "-------------------------------" << std::endl;
    std::cout << "bf_constructor: " << std::bitset<8>(bf_constructor.get_bits()) << std::endl;
    std::cout << "bf_constructor.dump() : " << bf_constructor.dump() << std::endl;

    // Using the template function to manipulate c style!
    ::set_bit(bf_constructor,b3);
    std::cout << "global function - bf_constructor: " << std::bitset<8>(bf_constructor.get_bits()) << std::endl;

//    ::set_bit(bf_constructor,0x08); // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
//    bf_constructor.set(0x08);       // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
//    bf_constructor.get(0x08);       // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
//    bf_constructor.toggle(0x08);    // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
    return 0;
}

I created four template functions: set_bit, clear_bit, toggle_bit and is_bit_set. Their purpose is both to serve as a simple interface and to keep the types in place. They're used throughout the class, and they could have been declared in the class, but for me that would make the functions lose their purpose.

Notice there are two enumerations, namely Mask8 and Mask16. These two types are used in the bitfield to ensure the type safety. The main function is meant as a simple test of that type safety. Play with this stuff all you want.

You'll notice that you cannot pass, say, an int to any of the mask operators (or functions); all will give you a compile error, which is the sole intent of the implementation. Where would you use it, you ask? I'd use it everywhere I find a C/C++ bit-field inside a structure, to keep the code portable across platforms and to be able to use the above bitfield class in network telegrams.

December 19, 2013

Does size matter?

The Linux /var log files are among the things you should know about when working on Linux systems. I don't yet know all their contents, but I can remember a few of them from earlier debugging sessions. I came across these while searching for an expression to give me the largest files present on my system.

# du -ha /var | sort -rh | head -n 10

The result from this command on my system was:

1004K /var/www/user_journeys/manual_cut_out_list.docx
996K /var/lib/dpkg/info/openjdk-7-doc.list
980K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmxnet
976K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmci/common
968K /var/log/auth.log.1
920K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmsync
912K /var/lib/gconf/defaults/%gconf-tree.xml
884K /var/lib/dkms/virtualbox-guest/4.1.12/build/vboxsf
880K /var/lib/dpkg/info/linux-headers-3.5.0-18.list
864K /var/lib/dpkg/info/linux-headers-3.2.0-57.list
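One caveat: du -ha prints human-readable sizes (996K, 1.5M, 2G), but plain sort -n compares them as bare numbers and ignores the suffix, so a 2M entry would sort below a 10K one. GNU sort's -h flag understands the suffixes. A quick way to see the difference, on made-up sizes:

```shell
# sort -n would compare only the leading digits (512 > 10 > 2 > 1);
# sort -h (GNU coreutils) knows that no suffix < K < M < G.
printf '%s\n' '10K a' '2M b' '1G c' '512 d' | sort -rh
# prints: 1G c, 2M b, 10K a, 512 d (largest first)
```

On systems with GNU coreutils the same flag works directly in the pipeline above.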

December 18, 2013

And I quote:
Thus, for example, read “munger(1)” as “the ‘munger’ program, which will
be documented in section 1 (user tools) of the Unix manual pages, if it’s present on your system”. Section 2 is C system calls, section 3 is C library calls, section 5 is file formats and protocols, section 8 is system administration tools. Other sections vary among Unixes but are not cited in this book. For more, type man 1 man at your Unix shell prompt.
The Art of Unix Programming.

System information

When working on Debian-based distributions, and most likely other distribution types, you sometimes need to get system information. Here's a bunch of commands to get some info:
lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 13 Maya
Release: 13
Codename: maya

uname -a
Linux SorteSlyngel 3.2.0-31-generic-pae #50-Ubuntu SMP Fri Sep 7 16:39:45 UTC 2012 i686 i686 i386 GNU/Linux

apt-cache policy aptitude
aptitude:
  Installed: 0.6.6-1ubuntu1.2
  Candidate: 0.6.6-1ubuntu1.2
  Version table:
 *** 0.6.6-1ubuntu1.2 0
        500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
        100 /var/lib/dpkg/status
     0.6.6-1ubuntu1 0
        500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages

apt-cache policy apt
apt:
  Installed: 0.8.16~exp12ubuntu10.16
  Candidate: 0.8.16~exp12ubuntu10.16
  Version table:
 *** 0.8.16~exp12ubuntu10.16 0
        500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
        100 /var/lib/dpkg/status
     0.8.16~exp12ubuntu10.10 0
        500 http://security.ubuntu.com/ubuntu/ precise-security/main i386 Packages
     0.8.16~exp12ubuntu10 0
        500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages

apt-cache policy python-apt
python-apt:
  Installed: 0.8.3ubuntu7.1
  Candidate: 0.8.3ubuntu7.1
  Version table:
 *** 0.8.3ubuntu7.1 0
        500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
        100 /var/lib/dpkg/status
     0.8.3ubuntu7 0
        500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages

To get the name of a package providing a file:

dpkg -S /bin/ls
A useful trick for finding missing stuff: I had a missing file when upgrading from Mint 12 to 13, and the above command solved that issue. To get some info on the package found:

dpkg -s grub-common
A short post about the Debian Administrator's Handbook: this should be read by all users of Debian-based systems to understand the aptitude commands.

December 17, 2013

How many hours did you do on this or that project this year?

Yikes, it's getting close to the end of the year, and I just know somewhere someone is lurking in the shadows, ready to check my daily hour registrations. Unfortunately, this year the tool used to post hours in is the *agile* tool redmine.

This tool is of course not compatible with the company hour registrations, so I'd better create a report by doing some cmdline stuff. Luckily, redmine can export time spent to a csv file, so that is what I did.

Once the file is ready, the following cmdlines can be of assistance for analysing the timesheet data:

1. Find all hours spent in 2013 and dump them to a new file:
  grep 2013 timelog.csv > time_2013_me.csv

2. Find all projects worked on in 2013:
  cut -d ';' -f 1 time_2013_me.csv | sort | uniq

3. Find the total amount of reported hours:
  cut -d ';' -f 7 time_2013_me.csv | awk '{total = total + $1}END{print total}'

4. Find all hours on a specific project:
  grep <project name> time_2013_me.csv | cut -d ';' -f 7 | awk '{total = total + $1}END{print total}'

5. Find all hours on projects other than a specific project:
  grep -v <project name> time_2013_me.csv | cut -d ';' -f 7 | awk '{total = total + $1}END{print total}'
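Steps 4 and 5 can also be collapsed into a single pass that totals every project at once, using awk's associative arrays. A sketch; the sample lines below are made up to stand in for the real time_2013_me.csv, whose column layout (project in field 1, hours in field 7) I'm assuming from the cut commands above:

```shell
# A small stand-in for the exported timelog: project in field 1, hours in field 7.
printf '%s\n' \
  'projA;x;x;x;x;x;2.5' \
  'projB;x;x;x;x;x;1.5' \
  'projA;x;x;x;x;x;4'   > time_2013_me.csv

# Sum field 7 per field 1 in one pass, printing one total line per project.
awk -F ';' '{total[$1] += $7} END {for (p in total) print p, total[p]}' time_2013_me.csv | sort
# prints: projA 6.5
#         projB 1.5
```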

That's it and that's that.

December 12, 2013

Updating Linux mint ... Ubuntu style!

Every mint distribution, and Ubuntu distribution for that matter, has a distribution name attached. Linux mint12 is called Lisa, mint13 is called Maya etc. All names for Linux mint versions can be found at: old releases and for Ubuntu at: releases
The dist-upgrade trick is to find your current Linux mint release. If you don't know the name already, it can be found by issuing:
grep mint /etc/apt/sources.list
deb http://packages.linuxmint.com/ lisa main upstream import
The distribution name is present at the first space after the URL. To get the Ubuntu name, replace mint in the grep expression; the list is a bit longer, but the name is still in the same place. The Ubuntu version here is oneiric.
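Since the codename is always the third whitespace-separated field of a deb line, awk can pull it out directly. A sketch run on the sample line from above (on a real system you'd read /etc/apt/sources.list instead of echoing):

```shell
# The codename sits in field 3 of each 'deb <url> <codename> <components...>' line.
echo 'deb http://packages.linuxmint.com/ lisa main upstream import' \
  | awk '/^deb /{print $3; exit}'
# prints: lisa
```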

You can follow this approach when upgrading all your mint installations, but you should only go one increment at a time.
The best approach for updating is to install the system you’d like to update in a virtual machine, and then apply the update to that system to see what actually happens. Using this approach may seem somewhat overkill, but it is likely to save you a lot of work when trying to fix your broken installation later.

Before you begin you should know that the mint-search-addon package cannot be resolved.
sudo aptitude remove --purge mint-search-addon
If you do not have a login manager, e.g. mdm, you should install one, then log out and back in to check that the new manager works flawlessly.
sudo aptitude install mdm
Installing and configuring mdm ensures that you're not logging into X using the startx script, as that will most likely break your X login after the dist upgrade: the Ubuntu X script will replace the mint X script, leaving you at an old-school login shell.
sudo aptitude upgrade && sudo aptitude clean
Once the updates have run, reboot to ensure that any unknown dependencies are set straight prior to the actual dist upgrade.
sudo reboot
When the system is back in business it's time to start the actual dist upgrade. We'll be doing the upgrade Ubuntu style. Issue (yes, I know it's a long command):
sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup && sudo perl -p -i -e "s/lisa/maya/g" /etc/apt/sources.list && sudo perl -p -i -e "s/oneiric/precise/g" /etc/apt/sources.list && sudo apt-get update && sudo apt-get dist-upgrade && sudo apt-get autoremove && sudo apt-get autoclean
Notice that the dist upgrade is done using apt-get and not aptitude; this is recommended by Ubuntu, so I'll use it here, but that is the only place.
You should follow the dist upgrade closely, as there will be issues you'll need to resolve during it: issues with respect to your current machine's configuration.
BEWARE: You should not blindly accept changes to your existing /etc/sudoers file! That is why -y has not been added to the dist-upgrade command. If you are in doubt, select the package maintainer's version (I usually do this on any update); your old version will be kept by dpkg, so you'll always be able to find your old configuration.
With the new packages installed, cross your fingers, hope for the best, and reboot.
sudo reboot
If you get a black screen when you reboot your (virtual) machine, it is most likely because the graphical support has failed. Try hitting Ctrl+Alt+F1 to get an old-school login prompt. Log in, then run the upgrade in the terminal until there's nothing left.
sudo aptitude upgrade
My upgrade stopped at the mint-search-addon package for Firefox, but you have removed this package, so it should not cause any problems. If it does anyway, run the remove command from above again.

Once all the upgrades have run, you're ready to start X to get a feel for what the problems with going graphical may be. In my case it was a missing setting for the window manager; I had only lightdm installed, which attempted to start an Ubuntu session :(
Use the mdm window manager to get things up and running. If you get the missing file /usr/share/mdm/themes/linuxmint/linuxmint.xml dialog, it means you're missing the mint-mdm-themes package, so issuing:
sudo aptitude install mdm mint-mdm-themes mint-backgrounds-maya mint-backgrounds-maya-extra
sudo service mdm start
should bring you to a graphical login prompt. All you have to do now is resolve the other dependencies that you may have.
My new kernel could not be configured because a link was missing in grub: /usr/share/grub/grub-mkconfig_lib must be either a symbolic link to, or a copy of, /usr/lib/grub/grub-mkconfig_lib. It can be fixed by setting the symbolic link:
cd /usr/share/grub/
ln -s /usr/lib/grub/grub-mkconfig_lib grub-mkconfig_lib
then
dpkg --configure -a
fixed the missing configuration parts for the new kernel dependencies. Now, the kernel image from the old Linux mint12 system must be removed, because it interrupts aptitude and dpkg. The old kernel is removed by:
sudo aptitude remove linux-image-3.0.0-32-generic
Finally, reboot and wait. Once the system is up again, you'll need to activate the new repository sources, since dist-upgrade only sets packages to the distribution packages; it does not automatically choose the latest versions of those packages. Issue:
sudo aptitude update
sudo aptitude upgrade
Voila, you should be good to go. Some updates may not have been picked up, and some packages may not be installed; check the update status from aptitude update: there's a [-xx] counter at the bottom of the terminal, where - means there's still stuff you could upgrade. Now, if you're up for it, you should try to update to Linux mint 14 (nadia) from this platform ;)


December 10, 2013

Fortune cookies

Fortune cookies, the only computer cookies worth saving.

                                    -- John Dideriksen

And there I was, blogging about setting up an apt repository on your website, when suddenly the idea of creating a fortune cookie package struck me. To use the fortunes you'll need fortunes installed; simply issue:

#sudo aptitude install fortunes


Fortune cookies? But how do you get those into your machine, and how do you handle the crumb issues? Do you install a mouse?

That awkward moment when someone says something so stupid,
all you can do is stare in disbelief.
                                                                                                                                -- INTJ 
 
Fortune cookies are deeply explained right here (too deeply, some may say, but I like it). How you bake your own flavored cookies is a completely different question. But the recipe is extremely simple. Once you have your text file ready, it should look like this, according to the ancient secret recipe:

Fortune cookies, the only computer cookies worth saving.

                                    -- John Dideriksen
%
That awkward moment when someone says something so stupid,
all you can do is stare in disbelief.
 
                                               -- INTJ 
%
I've found him, I got Jesus in the trunk.

                        -- George Carlin
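Before feeding the file to strfile you can sanity-check the % separators. A sketch that counts the cookies by treating the % lines as record separators (the quotes file name matches the recipe above; the sample cookies are made up):

```shell
# Three tiny cookies separated by % lines, in the strfile input format.
printf '%s\n' 'cookie one' '%' 'cookie two' '%' 'cookie three' > quotes

# Each %-delimited record is one cookie; count the non-empty ones.
awk 'BEGIN{RS="%\n"} NF{n++} END{print n, "cookies"}' quotes
# prints: 3 cookies
```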
 
Now all you have to do is install your fortunes using strfile. Issue:

#sudo cp quotes /usr/share/fortunes
#sudo strfile /usr/share/fortunes/quotes


And then

#fortune quotes

should hit one of your freshly baked fortune cookies. Putting these freshly baked babies in a jar, in the form of a Debian package, is just as easy. Simply create a Debian package following the recipe here.

November 29, 2013

Making files

Makefiles, phew!? Admittedly, I previously used the shell approach, putting mostly bash commands in the files I made. However, I somehow felt I had to change that, since making makefiles generic without utilising the full potential of make is both hard and rather pointless.

I have a repository setup for Debian packages. These packages are of two types: some are simple configuration packages, and others are Debian packages made from installation-ready 3rd-party tarballs. I aim at wrapping these 3rd-party packages into Debian packages for easy usage by all my repository users.

The setup for creating the packages is a top-level directory called packages, which contains the source for each of the individual packages. These could be Altera, a package that contains a setup for the Altera suite, or Slickedit, a package containing the setup for Slickedit used in our users' development environment.

The common rule set for all the individual package directories is:
1. Each directory must be named after the package, i.e. the Slickedit package is in a directory called Slickedit
2. Each directory must contain a Makefile
3. Each directory must contain a readme.asciidoc file (documentation x3)
4. Each directory must contain a tar.gz file with the source package
5. Each directory must contain at least one source directory appended with -package

The above rules give the following setup for the Slickedit package:

#tree -L 1 slickedit/
slickedit/
├── Makefile
├── readme.asciidoc
├── slickedit-package
└── slickedit.tar.gz

1 directory, 3 files

The makefile targets offered to the system that utilizes the package build routine are all, clean and distclean: all extracts and builds the Debian package, clean removes the Debian package, and distclean removes the extracted contents of the -package directory.

Furthermore, the makefile contains three targets named package, pack and unpack, where package builds the Debian package from the -package directory, pack creates a tarball of the -package directory in case there are changes to the package source, and unpack extracts the tarball into the -package directory.

The makefile for the packages:
DEBCNTL = DEBIAN
TARFLAGS = -zvf
EXCLUDES = --exclude $(DEBCNTL)

# These are the variables used to set up the various targets
DIRS = $(subst /,,$(shell ls -d */))
PKGS = $(shell ls *.tar.gz)

CLEAN_DIRS:=$(subst package,package/*/,$(DIRS))
DEBS:= $(DIRS:%-package=%.deb)
TGZS:= $(DIRS:%-package=%.tar.gz)

# These are the targets provided by the build system
.PHONY: $(DIRS)

all: unpack package

package: $(DEBS)

pack: $(TGZS)

unpack:
	find . -name '*.tar.gz' -type f -exec tar -x $(TARFLAGS) "{}" \;

clean:
	rm -vf $(DEBS)

distclean: clean
	ls -d $(CLEAN_DIRS) | grep -v $(DEBCNTL) | xargs rm -fvR

# These are the stem rules that set the interdependencies
# between the various output types
$(DEBS): %.deb: %-package

$(TGZS): %.tar.gz: %-package

# Stem targets for generating the various outputs
# These are the commands that generate the output files
# .deb, .tar.gz and -package/ directories
%.deb: %-package
	fakeroot dpkg-deb --build $< $@

%.tar.gz: %-package
	tar -c $(TARFLAGS) $@ $< $(EXCLUDES)

The major pitfall I had when creating the makefile was figuring out the rules for the .tar.gz, .deb and
-package targets. The first two are straightforward to create using the % stem, but when creating the
-package target I ran into a circular dependency, because the pack and unpack rules have targets that, when defined using pattern rules, are contradictory.

%.tar.gz: %-package 

and the opposite

%-package: %.tar.gz 

caused the unpack target to execute both the extract and the create-tarball recipes, leading to the error of a Debian file containing only the DEBIAN directory; not much package there. Since the aim was a generic makefile, one to be used for all packages, I ended up using the find command to find and extract all tarballs. I figured this was the easiest approach, since the pattern rules didn't work as intended.
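The working direction of the stem rule can be reproduced in a few lines. A minimal sketch, assuming make is available; the foo-package name is made up for the demo:

```shell
# One fake package directory plus a makefile with the %.tar.gz: %-package stem rule.
mkdir -p demo/foo-package && echo hi > demo/foo-package/file
printf 'all: foo.tar.gz\n%%.tar.gz: %%-package\n\ttar -czf $@ $<\n' > demo/Makefile

# make matches foo.tar.gz against the %.tar.gz pattern and builds it from foo-package.
( cd demo && make -s foo.tar.gz && tar -tzf foo.tar.gz )
# lists foo-package/ and foo-package/file
```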

November 15, 2013

Custom spotify playlists

The problem with spotify and iTunes etc. is that there's really no good support for playlists. By playlists I mean importing, say, csv lists. Why would I need a list like this? Because every now and then I'd like to just create a simple playlist containing the artist name and track as ARTIST,TRACK repeated.

That's where ivyishere comes in, a cool online tool that can export your csv playlists, amongst others, to spotify. Thanks.

How do I get a csv playlist? In this case I found the work of another House MD fan who did spend the time to organize all tracks from all shows on a website. I'd like that playlist without having to add every single song by hand.

This is what I did.

1. Copy the neatly ordered html table using your mouse, from top to bottom, selecting all the songs you'd like in your playlist.

2. Open libreoffice calc and paste the songs there, select the html formatted table and wait.

3. Delete the rows you don't need, keeping only artist and track.

4. Copy the artist and track columns, paste them into a new document, then save this document as csv.

5. Fire up your shell and perl the shit outta the csv.

cat housemd.playlist.csv  # to see what you have to deal with

Figure out the regular expressions you'll need. Yes, you can most likely find a more combined expression than I did; nevertheless, my way worked ;)


perl -p -e "s/^,$//g" # remove the lines containing just a ,
perl -p -e "s/^\s$//g" # remove lines containing just whitespace
perl -p -e "s/^'No.*,$//g" # remove the lines containing the 'No Commer ... text
perl -p -e "s/\"|\'//g" # remove the ' and " from all artists and songs

The final expression for me was:

perl -p -e "s/^,$//g" housemd.playlist.csv | perl -p -e "s/^'No.*,$//g" | perl -p -e "s/^\s$//g" | perl -p -e "s/\"|\'//g" > housemd.playlist.2.csv

6. Cat your file to see if it is what you'd expect.

7. Upload your file to ivyishere and wait.

Thanks perl and ivyishere.


November 04, 2013

Asciidoc filters

Using asciidoc? No?! How come? If you're writing technical documentation, blog posts or books, this is one of the best programs available.

Asciidoc is installed using your distribution's or system's package manager, e.g. for MacPorts on OSX:

$ sudo port install asciidoc

Substitute port with apt-get, aptitude, yum or whatever the package manager on your system is. Now that asciidoc is installed, simply start writing your document, blog or book in a text file in your favorite editor. Open it and paste the following text.

= My first Asciidoc
:author: <your name>
:toc:

== Introduction
When you're ready to release or upload, generate a neat-looking document using the command:

.Generating an asciidoc

[source,sh]
asciidoc my.first.asciidoc;

To see the document that you have just created, simply follow the document's instructions and run the command, with your document name obviously.

$ asciidoc my.first.asciidoc

The following renders like this using the standard asciidoc theme:


To get started quickly with asciidoc, Powerman has created a cheat sheet; use it to see some of the things you can do. One of the things not included in the cheat sheet is the fact that asciidoc allows you to execute scripts on the command line.

This is extremely useful when writing technical documentation where you'll need to include information like realtime directory contents information or reporting logged in users directly in your document. 

Adding a shell command ensures that the command is run when the document is being generated, every time the document is generated. Add the following to your document.

.Listing asciidoc's present in the current directory
[source,sh]
----
sys::[ls *.asciidoc]
----

Now, instead of rendering the actual command, asciidoc executes it and renders the result, in this case the normal output of the ls *.asciidoc command.

As you can see, I have 2 asciidoc files in my current directory. If I wanted to, I could include the filters.asciidoc file in the one I'm currently writing. Add the following text to your asciidoc file:

Adding the next document after this line

:leveloffset: 1

include::filters.asciidoc[]

:leveloffset: 0

And were back in the original document.

The include::<filename>[] statement is where the magic happens. This is the line that includes the filters.asciidoc file. The included file doesn't have to be an asciidoc document; any file can be used.

The :leveloffset: 1 is needed for asciidoc to offset the included document's headers to the right level. After the include statement we simply pop the header level back with :leveloffset: 0, and we're back in the original document.


Notice how the included document sits between the "Adding ..." and "we're back" text. A very useful feature when you have more than one person working on documentation, as this avoids numerous document merges when you're working on separate files in a team.

Did you notice the cool graphic that was included in the document? This graphic is rendered by an asciidoc plugin called Ditaa. There are several plugins available for asciidoc; some of these can be found on the asciidoc plugin page.

Each of the plugins has installation instructions and usage information included. Here's an example of the ditaa plugin. First download the ditaa.zip file, then install it to your local user's ~/.asciidoc directory.

$ mv Downloads/asciidoc-ditaa-filter-master.zip Downloads/ditaa-filter-master.zip
$ asciidoc --filter install Downloads/ditaa-filter-master.zip

The ditaa plugin is the one rendering the image displayed in my.first.asciidoc. Here's how it's done. Create a new file called filters.asciidoc and fill it with these contents:

= Asciidoc filters example
:author: John Dideriksen
:toc:

== testing various filters on the MacPort edition
The following document is used to test some of the asciidoc plugins for drawing, all examples have been taken from the authors plugin documentation page.

=== Ditaa

["ditaa"]
---------------------------------------------------------------------
    +--------+   +-------+    +-------+
    |        | --+ ditaa +--> |       |
    |  Text  |   +-------+    |diagram|
    |Document|   |!magic!|    |       |
    |     {d}|   |       |    |       |
    +---+----+   +-------+    +-------+
        :                         ^
        |       Lots of work      |
        +-------------------------+
---------------------------------------------------------------------

stops here

As you can see, ditaa renders the graphic from an ascii image. This is really useful since you do not have to worry about opening a new program and maintaining a separate drawing.

If your documentation is under source control you can easily track the changes in the diagrams for the documentation, just like any other changes. Ditaa is just one of many filters you can install in asciidoc, aafigure is another example of a filter.

You can list your installed asciidoc filters using the command:

$ asciidoc --filter list
/opt/local/etc/asciidoc/filters/code
/opt/local/etc/asciidoc/filters/graphviz
/opt/local/etc/asciidoc/filters/latex
/opt/local/etc/asciidoc/filters/music
/opt/local/etc/asciidoc/filters/source
/Users/cannabissen/.asciidoc/filters/ditaa

Notice that asciidoc comes with a set of preinstalled plugins that you can use at will. You remove an installed filter with the command:

$asciidoc --filter remove ditaa

The asciidoc files for this short example can be downloaded here: my.first.asciidoc & filters.asciidoc.

June 25, 2013

Viewing your protocol in Wireshark and playing with libpcap

Writing network code, eh? At times I am, and for this particular network stuff I needed a protocol dissector for wireshark, as one of these makes it that much easier to verify that you're sending the correct stuff on your wire.

First off, you'll most likely need to modify the wireshark installation to allow specific users to run the tool. This setup also avoids running wireshark as root. [README]

# sudo dpkg-reconfigure wireshark-common

Answer yes to allowing users to capture on interfaces. Next, you'll need to add the user(s) to the wireshark group to allow them to use the sniffer tool!

# sudo usermod -a -G wireshark $USER

Note that $USER is expanded by your own shell before sudo runs, so the command above adds your user, not root. If you're running in a root shell, replace $USER with the actual user name.

Finally, for your group changes to take effect you'll need to log out of gnome and back in :O I know, it sucks, etc., but that's what you'll have to do!

Editor's note: You can use this neat trick to force a logout after package installation in the package scripts. Neato!

# gnome-session-quit --logout --no-prompt 

Onwards to the protocol stuff: [source]

Open your editor and create a simple Lua dissector [source].

Now you'll need libpcap to send some data over the wire. I prefer libpcap as most of this code will be portable to windows using winpcap. This way you won't need to use a strategy pattern for teh socket stuff. As the libpcap/winpcap servers as this pattern.