Automatically remove dead "java" processes; simply replace <process> with java in the command below:
#ps aux | grep <process> | grep <optional other process> | tr -s " " | cut -d" " -f2 | xargs kill -9
The key point here is that the output from the ps command is cluttered with spaces and tabs, which makes it awkward to use with cut. So tr is used to trim the lines; that is exactly what the expression does: it replaces runs of spaces with a single space character, allowing us to use that character as the delimiter for cut.
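If the pgrep and pkill tools from procps are available on your system (they are not used in the original one-liner, so this is an assumption about your setup), the same cleanup can be done more directly, since -f matches against the full command line:
pkill -9 -f java
pgrep -f java | xargs -r kill -9
The second form lets you inspect the matching PIDs with pgrep -f java before piping them to kill.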
September 22, 2014
September 11, 2014
Testing memory maps on embedded systems
Here's the thing: sharing constant values between FPGA hardware (HW) and software (SW) can be hard for both sides. The constants that need sharing are memory addresses and masks for various registers. The main reason that sharing constants is hard is that no simple framework, at least to my knowledge, exists.
Usually, both sides' constants can be generated from some common format, or the HW side has to create an interface header containing the constants needed by the SW. Either way, when a constant is changed it needs to be tested properly.
But what happens when the HW side changes and the new FPGA code is released? Mayhem, since the HW doesn't have the complete SW source to test against, as their FPGA project could be in a different system, in a different source control etc.
In this case the HW should at least create a simple program that verifies their naming conventions, addresses and masks are the same as the last release. This is IMHO the smallest possible release test of a new FPGA image that can be run before releasing the HW. I'm not stating this is enough to actually test the HW, but it is enough to ensure that the SW does not break due to new names and wrong constants.
For this I have written a small C program. This program uses the macros defined by the HW and tests them: first against C compilation, a step that is important because changing a name would break compilation on the SW side, and changes like these should not be possible without being caught. Second, changed masks and address values should also be verified, at least against the previous version.
This will ensure that the HW side will know exactly what they break on the SW side if they change anything. And they'll clearly have a visual indication of what fails, enabling them to report any changes.
/***
name: macro dumper
version: 0.1
author: john
description:
This program is a simple interface test for the defined memory map.
The program dumps the macro's name and value to stdout.
The program's purpose is to test all generated macros against their generated
name and value, providing an interface test of the generated HW memory map.
The program will fail to compile if any of the macros change their name relative
to the first version of the HW interface.
The test program below is supposed to fail on the deliberate fail macro! That
line may be commented out in a release version of the simple test program.
Remember that new test cases must be added when new HW is introduced.
When you're introducing new macros in the hardware interface the following
regular expressions will assist you in getting the macros from the interface
header and into valuable C code that you can paste into the source's test
sections below.
The comment format changes for a reason, namely that the regular expressions below
contain both * and /, which would end this comment ;)
***/
// Regular expressions to:-
// List all macros in the interface with all other things stripped:
//
// egrep "#define [a-zA-Z0-9_]+ +" macro_dumper.c | perl -p -e "s/#define +(.+) \((.*)\)/NAME: \$1 VALUE: \$2/g"
//
// Replace all macros with a corresponding test call for inserting into the name test:
//
// egrep "#define [a-zA-Z0-9_]+ +" macro_dumper.c | perl -p -e "s/#define +(.+) \((.*)\) ?.*/TEST_NAME(\$1);/"
//
// Replace all macros with a corresponding test call for inserting into the value test:
//
// egrep "#define [a-zA-Z0-9_]+ +" macro_dumper.c | perl -p -e "s/#define +(.+) \((.*)\) ?.*/TEST_VALUE(\$1,\$2);/"
#include <stdio.h>
/*
This test result construction is meant to let the test produce output usable by any other command line tool.
The purpose is that test_result is returned as the exit value (>0 in case of errors) so errors are reported to the OS.
An exit value of zero indicates that no failures were reported.
*/
static int override_return_value = 0;
static int test_result = 0;
static int test_case = 0;
int assign_test_result(int value,int varname)
{
if(value != varname) {
test_result++;
return 0;
}
return 1;
}
#define TEST_NAME(varname) fprintf(stdout, "%s = %x\n", #varname, (varname));
#define TEST_VALUE(varname,value) fprintf(stdout, "%d - %s = %s\n",++test_case, #varname, (assign_test_result(varname,value))?"ok":"error");
/* #include "memorymap.h" */
/* The macros below are meant to be placed in the memorymap file provided by HW; here they are simply a test of the concept */
#define TEST1_MACRO (0x010000000) /* emulating some start address */
#define TEST2_MACRO (0x2000) /* emulating some offset */
#define TEST3_MACRO (TEST1_MACRO + TEST2_MACRO) /* Check to see that the macro names are reported correctly when combined */
#define TEST_MACRO_MASK (0x0010) /* Test for some kind of mask */
#define DELIBERATE_ERROR_MACRO (0xff) /* The check for this macro should not be 0xff as we deliberately want this check to fail */
int main(int argc, char** argv)
{
override_return_value = argc-1;
fprintf(stdout,"%s","This program checks the name and address values of memory map macros\n");
if(override_return_value) {
fprintf(stdout,"%s","Running in override return value mode, errors are only reported not breaking!\n");
}
/*
This is the name test part of the program; any non-complying macros will cause a compilation error
on the exact line of the non-existent macro name. The test is valuable for finding incompatible
name changes in the interface
*/
TEST_NAME(TEST1_MACRO);
TEST_NAME(TEST2_MACRO);
TEST_NAME(TEST3_MACRO);
TEST_NAME(TEST_MACRO_MASK);
TEST_NAME(DELIBERATE_ERROR_MACRO);
/*
This part tests against the original macro values, to ensure that we know when memory addresses actually change.
This is not a test meant to break the interface;
it's simply an indication that an address or mask has changed.
*/
TEST_VALUE(TEST1_MACRO,0x010000000);
TEST_VALUE(TEST2_MACRO,0x2000);
TEST_VALUE(TEST3_MACRO,0x010000000 + 0x2000);
TEST_VALUE(TEST_MACRO_MASK,0x0010);
TEST_VALUE(DELIBERATE_ERROR_MACRO,4); /* This is a deliberate error to test the test macro ;) */
/*
Reporting test case statistics
*/
fprintf(stdout,"Tests run: %d\n",test_case);
fprintf(stdout,"Failures: %d\n",test_result);
return (override_return_value)?test_result:0;
}
The program is compiled using gcc or equivalent by issuing the following in your terminal:
john@BlackWidow ~/john/src/c $ gcc macro_dumper.c -o macro_dumper
You run the program by issuing:
john@BlackWidow ~/john/src/c $ ./macro_dumper
This program checks the name and address values of memory map macros
TEST1_MACRO = 10000000
TEST2_MACRO = 2000
TEST3_MACRO = 10002000
TEST_MACRO_MASK = 10
DELIBERATE_ERROR_MACRO = ff
1 - TEST1_MACRO = ok
2 - TEST2_MACRO = ok
3 - TEST3_MACRO = ok
4 - TEST_MACRO_MASK = ok
5 - DELIBERATE_ERROR_MACRO = error
Tests run: 5
Failures: 1
john@BlackWidow ~/john/src/c $ echo $?
0
Above you can see the output from the program. Keep in mind that the program should be compiled at every HW release and run to check against the interface.
The program is constructed in such a way that giving one or more arguments of any type will cause the program to report test case failures to the OS, meaning that it could be used to break a Jenkins build bot on failures.
Yes, it can be expanded. Yes, it can be refactored. Yes, there's lots of potential for improvement. But for now it's the smallest possible thing that will save many hours of debugging non-working interfaces if something goes wrong.
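As a sketch of the Jenkins usage mentioned above (the argument text check-interface is arbitrary; the program only counts arguments to enable the override mode):
./macro_dumper check-interface
echo $?
With an argument the exit status is the number of failed value checks, so a Jenkins shell step running the command will fail the build when a macro value no longer matches.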
September 09, 2014
Matplotlib on mint 13
Today I had a friend who needed to use matplotlib on our company Linux boxes. We have some issues with our proxy server that rule python pip out. This, mixed with python version and other dependency issues, led me to write this small script, as a kind of configure script, for his code project.
The script deals with installing python3 and other prebuilt packages for ubuntu (mint) and then uses python3 build mechanisms & easy_install to add any remaining python egg dependencies.
The script source & listing:
#!/bin/sh
#
# This is the friendly python3 matplotlib installer script
# it covers all matplotlib dependencies as of Sep 9 2014
# Any additions to the installer must be added by hand!
#
# If none of the dependencies are installed, the script will:
# 1. Install prebuilt ubuntu packages
# 2. Create a local dir for downloaded files
# 3. install dependencies by downloading these from various sources
# 3.a cython
# 3.b numpy, scipy
# 3.c pyttk, six, tkinter, pip,matplotlib
# 4. remove the directory after installation
#
# john
# 1.
sudo aptitude install python3 python3-dev python3-setuptools python3-tk python3-cairo python3-cairo-dev libpng-dev
sudo aptitude install gfortran libopenblas-dev liblapack-dev
sudo easy_install3 -U distribute
# 2.
matlib=/tmp/matlib
[ -d $matlib ] && sudo rm -rf $matlib
mkdir $matlib && cd $matlib
# 3. Install dependencies, these vary and will change for future releases!
# As the matplotlib we use in the dse requires bleeding edge, we'll grab the latest stuff from the bleeding-edge repositories
# 3.a
wget http://cython.org/release/Cython-0.20.2.tar.gz && tar -zxvf Cython-0.20.2.tar.gz && cd Cython-0.20.2/ && sudo python3 setup.py install && cd -
# 3.b
#install bleeding edge eggs from git repositories
for package in numpy scipy; do
[ -d $package ] && sudo rm -rf $package
git clone http://github.com/$package/$package.git $package && cd $package && sudo python3 setup.py install && cd -
done
# 3.c
# install bleeding edge eggs using easy_install
# NOTICE: this loop can only be extended if the egg name is the same in both the python code and the module name in bash!
for egg in pip six tkinter pyttk matplotlib; do
sudo easy_install3 $egg
done
pip freeze
cd -
# 4.
sudo rm -rf $matlib
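A quick sanity check after the script has run might be (these two lines are not part of the original script; they simply try to import the freshly installed modules):
python3 -c "import matplotlib; print(matplotlib.__version__)"
python3 -c "import numpy, scipy; print(numpy.__version__, scipy.__version__)"
If both commands print version strings without an ImportError, the eggs are in place.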
April 03, 2014
A software company that doesn't suck
Short post, great article.
March 31, 2014
Configuring VNC connections to your box
A quick entry today. I needed to get rid of the *missing ubuntu session* dialog when connecting via VNC to my box@work.
So, I edited the vnc xstartup file:
#!/bin/sh
unset SESSION_MANAGER
gnome-session --session=gnome-classic &
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &
Finally start the vncserver with something like:
#vncserver -geometry 1280x1024 -depth 24
An in-depth resource for this issue.
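If you need to manage the server afterwards, vncserver can also start and stop a specific display; the display number :1 below is just an example:
vncserver :1 -geometry 1280x1024 -depth 24
vncserver -kill :1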
March 04, 2014
Assogiate
But Mooooooom, I want it like window$. Me just pointy clicky and then like magic the file I clicked opens in my favorite editor! Sigh... Did you ever have programmers (users) that worked their entire career on a single platform? Programmers that think everything should work like it used to... Programmers unwilling to learn new stuff?
I have a bunch of programmer rednecks who apparently need to point and click in "the configured file manager" to be able to open a source file. And these programmers want the same behavior on Linux as they have in Window$ file Exploder ... Sigh ...
Here's how: http://ubuntugenius.wordpress.com/2009/11/19/create-your-own-file-types-in-ubuntu-with-assogiate/. In my case I had to create a debian package carrying the settings, enabling these programmers to use synaptic to install the extension.
Programmers? ... Go figure ..
February 25, 2014
How to Remove the Visual Deals Popup Spam
I normally don't do this, but these deals are so annoying, so here's: How to Remove the Visual Deals Popup Spam. It's hidden in your Pinterest button!
February 10, 2014
gdb tips
I usually debug Linux programs using gdb, and I like the neatness of the tui mode, thanks for that feature! Here are some simple commands to get you started using gdb. There are several major docs out there; this is simply meant as a quick start and reminder reference for me, and I'll make no attempt to replace any of them.
There's a small guide from gnu on tui single key mode & a quick guide if you need more than I have shown here.
- Starting your program with arguments: gdb -tui --args love_calc john jane
- Setting a breakpoint in a specific file (in your huge project): b filename.cpp:linenum, i.e. b posix_thread.cpp:25
- Stepping is s (step), which also steps into functions; stepping a single machine instruction is stepi, and continue is c.
- Examining variables is done with p variable_name, i.e. p argv[1]
The above image shows the result of the commands.
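As a minimal sketch of such a session (love_calc is the hypothetical program from the argument example above, built with debug symbols via -g):
gcc -g -o love_calc love_calc.c
gdb -tui --args ./love_calc john jane
Inside gdb you then set a breakpoint with b, start the program with run, and inspect variables with p as listed above.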
Typesafe bit mask operations or Bits, bytes and operators.
In production? You're kidding, right? No way do some network structures contain C bit-fields. Turns out he wasn't kidding (sigh). Sometimes you just have to wonder how, or through whom, that stuff gets in there.
There's a neat trick in C++ for creating type-safe bitmasks using enums and a template class. You'll have to know your C++ operators and how to overload these if you're looking for more than just my simple template class. The source is here. You build it with: g++ bitfield.cpp -o bitfield -Werror
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <iostream>
#include <bitset>
#include <string>
template<class T,typename S>
class bitfield {
public:
inline bitfield();
inline bitfield(const T &bit);
inline const size_t size() const;
inline const S num_bits() const;
inline const S get(const T &bit) const;
inline const S &get_bits() const;
inline void set(const T &bit);
inline void clear(const T &bit);
inline void toggle(const T &bit);
inline S operator^=(const T &bit);
inline S operator|=(const T &bit);
inline S operator&(const T &bit);
inline S operator=(const T &bit);
inline std::string dump() const;
private:
S bits;
static const S zero = 0;
static const S bit_base = 8;
};
template<typename T,typename U>
inline void set_bit(U &bits,const T mask)
{
bits |= mask;
}
template<typename T,typename U>
inline void toggle_bit(U &bits,const T mask)
{
bits ^= mask;
}
template<typename T,typename U>
inline uint8_t clear_bit(U &bits,const T mask)
{
return bits &= ~mask;
}
template<typename T,typename U>
inline const bool is_bit_set(const U &bits,const T mask)
{
return bits & mask;
}
template<class T,typename S>
inline bitfield<T,S>::bitfield()
{
bits = zero;
}
template<class T,typename S>
inline bitfield<T,S>::bitfield(const T &bit)
{
bits = bit;
}
template<class T,typename S>
inline const S &bitfield<T,S>::get_bits() const
{
return bits;
}
template<class T,typename S>
inline const size_t bitfield<T,S>::size() const
{
return sizeof(*this);
}
template<class T,typename S>
inline const S bitfield<T,S>::num_bits() const
{
return size()*bit_base;
}
template<class T,typename S>
inline void bitfield<T,S>::set(const T &bit)
{
::set_bit(bits,bit);
}
template<class T,typename S>
inline void bitfield<T,S>::clear(const T &bit)
{
::clear_bit(bits,bit);
}
template<class T,typename S>
inline const S bitfield<T,S>::get(const T &bit) const
{
return ::is_bit_set(bits,bit);
}
template<class T,typename S>
inline void bitfield<T,S>::toggle(const T &bit)
{
::toggle_bit(bits,bit);
}
template<class T,typename S>
inline std::string bitfield<T,S>::dump() const
{
std::string out;
for(unsigned int i=num_bits();0!=i;i--)
{
out += ((1 << (i-1)) & bits) ? "1" : "0";
}
/* return by value; returning out.c_str() here would hand back a pointer into a destroyed local string */
return out;
}
template<class T,typename S>
inline S bitfield<T,S>::operator^=(const T &bit)
{
::toggle_bit(bits,bit);
return bits;
}
template<class T,typename S>
inline S bitfield<T,S>::operator|=(const T &bit)
{
::set_bit(bits,bit);
return bits;
}
template<class T,typename S>
inline S bitfield<T,S>::operator&(const T &bit)
{
return ::is_bit_set(bits,bit);
}
template<class T,typename S>
inline S bitfield<T,S>::operator=(const T &bit)
{
return bits = bit;
}
enum Mask16 {
ab1=0x0001,
ab2=0x0002,
ab3=0x0004,
ab4=0x0008,
ab5=0x0010,
ab6=0x0020,
ab7=0x0040,
ab8=0x0080,
ab9=0x0100,
ab10=0x0200,
ab11=0x0400,
ab12=0x0800,
ab13=0x1000,
ab14=0x2000,
ab15=0x4000,
ab16=0x8000,
};
enum Mask8 {
b1 = 0x01,
b2 = 0x02,
b3 = 0x04,
b4 = 0x08,
b5 = 0x10,
b6 = 0x20,
b7 = 0x40,
b8 = 0x80,
};
int main (int argc, char **argv)
{
bitfield<Mask8,uint8_t> bf8;
std::cout << "-------------------------------" << std::endl;
std::cout << "bf8 size: " << bf8.size() << std::endl;
std::cout << "Bits constructor: " << std::bitset<8>(bf8.get_bits()) << std::endl;
// bf8.set(b2);
bf8 |= b2;
std::cout << "Bit initialized: " << std::bitset<8>(bf8.get_bits()) << std::endl;
uint8_t bit_flip = 0;
for(int i=0;i<8;i++)
{
bit_flip = (1 << i);
const char *p = (bit_flip<=0x08) ? "0x0" : "0x";
std::cout << "-------------------------------" << std::endl;
std::cout << "Simulated Mask: " << std::bitset<8>(bit_flip) << std::endl;
std::cout << "Simulated Hex : " << p << std::hex << (int)bit_flip << std::endl;
// bf8.toggle(static_cast<Mask8>(bit_flip));
bf8 ^= static_cast<Mask8>(bit_flip);
// (bf8.get(static_cast<Mask8>(bit_flip))) ? std::cout << "true" << std::endl :std::cout << "false" << std::endl ;
(bf8 & static_cast<Mask8>(bit_flip)) ? std::cout << "true" << std::endl :std::cout << "false" << std::endl ;
std::cout << "bf8.bits " << std::bitset<8>(bf8.get_bits()) << std::endl;
}
bitfield<Mask16,uint16_t> bf16;
std::cout << "-------------------------------" << std::endl;
std::cout << "bf16 size: " << bf16.size() << std::endl;
std::cout << "Bits constructor: " << std::bitset<16>(bf16.get_bits()) << std::endl;
bf16.set(ab9);
std::cout << "Bit initialized: " << std::bitset<16>(bf16.get_bits()) << std::endl;
bf16 = ab10;
std::cout << "Bit initialized: " << std::bitset<16>(bf16.get_bits()) << std::endl;
std::cout << "num bits: " << std::dec << bf16.num_bits() << std::endl;
// testing for placement in a telegram!
struct test_telegram {
uint8_t version;
uint8_t type;
bitfield<Mask8,uint8_t> b;
}tt = {0};
std::cout << "-------------------------------" << std::endl;
std::cout << "tt size: " << sizeof(tt) << std::endl;
tt.b = b3;
std::cout << "tt.b: " << std::bitset<8>(tt.b.get_bits()) << std::endl;
std::cout << "tt.b.dump() : " << tt.b.dump() << std::endl;
bitfield<Mask8,uint8_t> bf_constructor(b5);
std::cout << "-------------------------------" << std::endl;
std::cout << "bf_constructor: " << std::bitset<8>(bf_constructor.get_bits()) << std::endl;
std::cout << "bf_constructor.dump() : " << bf_constructor.dump() << std::endl;
// Using the template function to manipulate c style!
::set_bit(bf_constructor,b3);
std::cout << "global function - bf_constructor: " << std::bitset<8>(bf_constructor.get_bits()) << std::endl;
// ::set_bit(bf_constructor,0x08); // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
// bf_constructor.set(0x08); // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
// bf_constructor.get(0x08); // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
// bf_constructor.toggle(0x08); // error needs a valid Mask8 type or a cast static_cast<Mask8>(0x08)
return 0;
}
Notice there are two enumerations, namely Mask8 and Mask16. These two types are used in the bitfield to ensure the type safety. The main function is meant as a simple test of that type safety. Play with this stuff all you want.
I created four template functions: set_bit, clear_bit, toggle_bit and is_bit_set. Their purpose is both to serve as a simple interface and to keep the types in place. They're used throughout the class; they could be declared inside the class, but for me that would make the functions lose their purpose.
You'll notice that you cannot set, say, an int in either of the mask operators (or functions); all will give you a compile error, which is the sole intent of the implementation. Where would you use it, you ask? I'd use it everywhere I find a C/C++ bit-field inside a structure, to keep the code portable across platforms and to be able to use the above bitfield class in network telegrams.
Labels:
bit operations,
bitfield,
bitmasks,
C++,
template
December 19, 2013
Does size matter
Linux var log files are among the things that you should know when working on Linux systems. I don't yet know all their contents, but I can remember a few of them from earlier debugging sessions. I came across these in search of an expression to give me the largest files present on my system.
# du -ha /var | sort -n -r | head -n 10
The result from this command on my system was:
1004K /var/www/user_journeys/manual_cut_out_list.docx
996K /var/lib/dpkg/info/openjdk-7-doc.list
980K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmxnet
976K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmci/common
968K /var/log/auth.log.1
920K /var/lib/dkms/open-vm-tools/2011.12.20/build/vmsync
912K /var/lib/gconf/defaults/%gconf-tree.xml
884K /var/lib/dkms/virtualbox-guest/4.1.12/build/vboxsf
880K /var/lib/dpkg/info/linux-headers-3.5.0-18.list
864K /var/lib/dpkg/info/linux-headers-3.2.0-57.list
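Note that with -h the sizes are human readable, so a plain numeric sort can mix up K, M and G entries; if your sort supports the -h flag (GNU coreutils does), this variant orders them correctly:
du -ah /var 2>/dev/null | sort -rh | head -n 10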
December 18, 2013
And I quote:
Thus, for example, read "munger(1)" as "the 'munger' program, which will be documented in section 1 (user tools) of the Unix manual pages, if it's present on your system". Section 2 is C system calls, section 3 is C library calls, section 5 is file formats and protocols, section 8 is system administration tools. Other sections vary among Unixes but are not cited in this book. For more, type man 1 man at your Unix shell prompt.
-- The Art of Unix Programming
System information
When working on Debian-based distributions, and most likely other distribution types, you sometimes need to get system information. Here's a bunch of commands to get some info:
lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 13 Maya
Release: 13
Codename: maya
uname -a
Linux SorteSlyngel 3.2.0-31-generic-pae #50-Ubuntu SMP Fri Sep 7 16:39:45 UTC 2012 i686 i686 i386 GNU/Linux
apt-cache policy aptitude
aptitude:
Installed: 0.6.6-1ubuntu1.2
Candidate: 0.6.6-1ubuntu1.2
Version table:
*** 0.6.6-1ubuntu1.2 0
500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
100 /var/lib/dpkg/status
0.6.6-1ubuntu1 0
500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages
apt-cache policy apt
apt:
Installed: 0.8.16~exp12ubuntu10.16
Candidate: 0.8.16~exp12ubuntu10.16
Version table:
*** 0.8.16~exp12ubuntu10.16 0
500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
100 /var/lib/dpkg/status
0.8.16~exp12ubuntu10.10 0
500 http://security.ubuntu.com/ubuntu/ precise-security/main i386 Packages
0.8.16~exp12ubuntu10 0
500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages
apt-cache policy python-apt
python-apt:
Installed: 0.8.3ubuntu7.1
Candidate: 0.8.3ubuntu7.1
Version table:
*** 0.8.3ubuntu7.1 0
500 http://archive.ubuntu.com/ubuntu/ precise-updates/main i386 Packages
100 /var/lib/dpkg/status
0.8.3ubuntu7 0
500 http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages
dpkg -S /bin/ls
A useful trick for finding missing stuff; I had a missing file when upgrading from Mint 12 to 13, and the above command solved that issue. To get some info on the package found: dpkg -s grub-common
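Once dpkg -S has told you which package owns a missing file, reinstalling that package usually restores it; the package name below is only an example:
sudo apt-get install --reinstall coreutils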
Short post about the Debian Administrator's Handbook; this should be read by all users of Debian-based systems to understand the aptitude commands.
December 17, 2013
How many hours did you do on this or that project this year?
Yikes, it's getting close to the end of the year and I just know that somewhere, someone is lurking in the shadows, ready to check my daily hour registrations. Unfortunately, this year the tool used to post hours in is the *agile* tool redmine.
This tool is of course not compatible with the company hour registrations, so I'd better create a report by doing some cmdline stuff. Luckily, redmine can export time spent to a csv file, which is what I did.
Once the file is ready, the following cmdlines can be of assistance for analysing the timesheet data:
1. Find all hours spent in 2013 and dump them to a new file
grep 2013 timelog.csv > time_2013_me.csv
2. Find all projects worked on in 2013
cut -d ';' -f 1 time_2013_me.csv | sort | uniq
3. Find total amount of reported hours
cut -d ';' -f 7 time_2013_me.csv | awk '{total = total + $1}END{print total}'
4. Find all hours on a specific project
grep <project name> time_2013_me.csv | cut -d ';' -f 7 | awk '{total = total + $1}END{print total}'
5. Find all hours on everything except a specific project:
grep -v <project name> time_2013_me.csv | cut -d ';' -f 7 | awk '{total = total + $1}END{print total}'
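If you'd rather get the per-project totals in one go instead of repeating step 4 for every project, an awk one-liner over the same file works (assuming, as in the commands above, that the project name is field 1 and the hours are field 7, separated by semicolons):
awk -F';' '{hours[$1] += $7} END {for (p in hours) printf "%s;%.2f\n", p, hours[p]}' time_2013_me.csv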
That's it and that's that.
December 12, 2013
Updating Linux mint ... Ubuntu style!
I have a friend who still hasn't updated his machine from Linux Mint 12 :O So I promised to create a guide on how this could be done. I simulated his system on a virtual machine.
Edit: This link contains a description of the recommended Mint update method using the backup tool. You should consider using that method, as the one I'm explaining in this post is potentially dangerous (no pain no gain!).
The following is a guide on how to do rolling updates on Linux Mint. The trick used is to change the repository sources from one Mint version to the other, as explained here.
Every mint distribution, and Ubuntu distribution for that matter, has a distribution name attached. Linux mint12 is called Lisa, mint13 is called Maya etc. All names for Linux mint versions can be found at: old releases and for Ubuntu at: releases
The dist-upgrade trick is to find your current Linux mint release, if you don’t know the name already, it can be found by issuing:
grep mint /etc/apt/sources.list
deb http://packages.linuxmint.com/ lisa main upstream import
The distribution name is present right after the URL, at the first field following it. To get the Ubuntu name, replace mint in the grep expression (with ubuntu, for instance); the list is a bit longer, but the name is still in the same place. The Ubuntu version here is oneiric.
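If you only want the codename itself, something like this extracts it (assuming the sources.list line format shown above, where the codename is the third field):
grep -m1 packages.linuxmint.com /etc/apt/sources.list | awk '{print $3}'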
You can follow this approach when upgrading all your mint installations, but you should only go one increment at a time.
The best approach for updating is to install the system you’d like to update in a virtual machine, and then apply the update to that system to see what actually happens. Using this approach may seem somewhat overkill, but it is likely to save you a lot of work when trying to fix your broken installation later.
Before you begin you should know that the mint-search-addon package cannot be resolved.
sudo aptitude remove --purge mint-search-addon
If you do not have a login manager, e.g. mdm, you should install one and log out and back in to check that the new manager works flawlessly.
sudo aptitude install mdm
Installing and configuring mdm should ensure that you're not logging into X using the startx script, as this will most likely break your X login after the dist upgrade. The Ubuntu X script will replace the Mint X script, leaving you at an old-school login shell.
sudo aptitude upgrade && sudo aptitude clean
Once the updates have run, reboot to ensure that any unknown dependencies are set straight prior to the actual dist upgrade.
sudo reboot
When the system is back in business it's time to start the actual dist upgrade. We'll be doing the upgrade Ubuntu style. Issue (yes, I know it's a long command):
sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup && sudo perl -p -i -e "s/lisa/maya/g" /etc/apt/sources.list && sudo perl -p -i -e "s/oneiric/precise/g" /etc/apt/sources.list && sudo apt-get update && sudo apt-get dist-upgrade && sudo apt-get autoremove && sudo apt-get autoclean
Notice that the dist upgrade is done using apt-get and not aptitude; this is recommended by Ubuntu, so I'll use it here, but that is the only place.
You should follow the dist upgrade closely, as there will be issues you’ll need to resolve during the dist upgrade. Issues with respect to your current machine’s configuration.
BEWARE: You should not blindly accept changes to your existing /etc/sudoers file! That is the rationale for why -y has not been added to the dist-upgrade command. If you are in doubt, select the package maintainer's version (I usually do this on any update); your old version will be kept by dpkg, so you'll always be able to find your old configuration.
With the new packages installed, cross your fingers, hope for the best, and reboot.
sudo reboot
If you get a black screen when you reboot your (virtual) machine, it is most likely because the graphical support has failed. Try hitting Ctrl+Alt+F1 to get an old-school login prompt. Log in, then do the upgrade in the terminal until there's nothing left.
sudo aptitude upgrade
My upgrade stopped at the mint-search-addon package for Firefox, but you have removed this package, so it should not cause any problems. If it does anyway, run the remove command from above again.
Once all the upgrades have run you're ready for starting X to get a feel for what the problems with going graphical may be. In my case it was a missing setting for the window manager; I had only lightdm installed, which attempted to start an Ubuntu session :(
Use the mdm display manager to get things up and running. If you get the missing-file /usr/share/mdm/themes/linuxmint/linuxmint.xml dialog, it means you're missing the mint-mdm-themes package, so issuing:
sudo aptitude install mdm mint-mdm-themes mint-backgrounds-maya mint-backgrounds-maya-extra
sudo service mdm start
Should bring you to a graphical login prompt. All you have to do now is resolve the other dependencies that you may have.
My new kernel could not be reconfigured because a link was missing for grub: /usr/share/grub/grub-mkconfig_lib must be either a symbolic link or a copy. Setting it up as a symbolic link to /usr/lib/grub/grub-mkconfig_lib can be done by:
cd /usr/share/grub/
sudo ln -s /usr/lib/grub/grub-mkconfig_lib grub-mkconfig_lib
then
dpkg --configure -a
This fixed the missing configuration parts for the new kernel dependencies. Now the kernel image from the old Linux Mint 12 system must be removed, because it interferes with aptitude and dpkg. The old kernel is removed by:
sudo aptitude remove linux-image-3.0.0-32-generic
Finally, reboot and wait. Once the system is up again, you'll need to activate the new repository sources, since dist-upgrade only moves packages to the new distribution's versions; it does not automatically choose the latest versions of those packages. Issue:
sudo aptitude update
sudo aptitude upgrade
Voila, you should be good to go. Some package indices may not be hit and some packages may not be installed; check the update status from aptitude update, there's a [-xx] counter at the bottom of the terminal, where - means there's still stuff you could upgrade. Now, if you're up for it, you should try to update to Linux Mint 14 (nadia) from this platform ;)
Labels:
aptitude,
Linux mint lisa,
linux mint maya,
mint update
December 10, 2013
Fortune cookies
Fortune cookies, the only computer cookies worth saving. -- John Dideriksen
And there I was, blogging about setting up an apt repository on your website, when suddenly creating a fortune cookie package struck me. To use the fortunes you'll need fortunes installed; simply issue:
#sudo aptitude install fortunes
Fortune cookies? But how do you get those into your machine, and how do you handle the crumb issues?? Do you install a mouse?
That awkward moment when someone says something so stupid,
all you can do is stare in disbelief.
-- INTJ
Fortune cookies are deeply explained right here, too deeply some may say, but I like it. How you bake your own flavored cookies is a completely different question. But the recipe is extremely simple. Once you have your text file ready, it should look like this, according to the ancient secret recipe.
Fortune cookies, the only computer cookies worth saving. -- John Dideriksen
%
That awkward moment when someone says something so stupid,
all you can do is stare in disbelief.
-- INTJ
%
I've found him, I got Jesus in the trunk.
-- George Carlin
Now all you have to do is install your fortunes using strfile. Issue:
#sudo cp quotes /usr/share/fortunes
#sudo strfile /usr/share/fortunes/quotes
And then
#fortune quotes
Should hit one of your freshly baked fortune cookies. Putting these freshly baked babies in a jar, in the form of a Debian package, is just as easy. Simply create a Debian package following the recipe here.
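A minimal sketch of what such a package could look like; the package name and maintainer below are made up for the example, and dpkg-deb --build is just one way to do it:
mkdir -p my-fortunes/DEBIAN my-fortunes/usr/share/fortunes
cp quotes my-fortunes/usr/share/fortunes/
strfile my-fortunes/usr/share/fortunes/quotes
cat > my-fortunes/DEBIAN/control << EOF
Package: my-fortunes
Version: 0.1
Architecture: all
Maintainer: john <john@example.com>
Description: Home baked fortune cookies
EOF
fakeroot dpkg-deb --build my-fortunes my-fortunes_0.1_all.deb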
November 29, 2013
Making files
Make files, phew!? Admittedly, I previously used the shell approach by adding mostly bash commands to the files I made. However, I somehow felt I had to change that, since making make files generic while not utilising the full potential of make is both hard and rather stupid.
I have a repository setup for Debian packages. These packages are of two types: some are simple configuration packages and others are Debian packages made from installation-ready 3rd-party tarballs. I aim at wrapping these 3rd-party packages into Debian packages for easy usage by all my repository users.
The setup for creating the packages is a top-level directory called packages, which contains the source for each of the individual packages. These could be Altera, a package that contains a setup for the Altera suite, or Slickedit, a package containing the setup for Slickedit used in our users' development environment.
The common rule set for all the individual package directories are:
1. Each directory must be named after the package, e.g. the Slickedit package is in a directory called slickedit
2. Each directory must contain a Makefile
3. Each directory must contain a readme.asciidoc file (documentation x3)
4. Each directory must contain a tar.gz file with the source package
5. Each directory must contain at least one source directory suffixed with -package
The above rules give the following setup for the Slickedit package:
#tree -L 1 slickedit/
slickedit/
├── Makefile
├── readme.asciidoc
├── slickedit-package
└── slickedit.tar.gz
1 directory, 3 files
The make file targets used by the system that utilizes the package build routine are all, clean and distclean. all extracts and builds the Debian package, clean removes the Debian package, and distclean removes the extracted contents of the -package directory.
Furthermore, the make file contains three targets named package, pack and unpack, where package builds the Debian package from the -package directory, pack creates a tarball of the -package directory in case there are changes to the package source, and unpack extracts the tarball into the -package directory.
Make file for the packages:
DEBCNTL = DEBIAN
TARFLAGS = -zvf
EXCLUDES = --exclude $(DEBCNTL)
# These are the variables used to set up the various targets
DIRS = $(subst /,,$(shell ls -d */))
PKGS = $(shell ls *.tar.gz)
CLEAN_DIRS:=$(subst package,package/*/,$(DIRS))
DEBS:= $(DIRS:%-package=%.deb)
TGZS:= $(DIRS:%-package=%.tar.gz)
# These are the targets provided by the build system
.PHONY: $(DIRS)
all: unpack package
package: $(DEBS)
pack: $(TGZS)
unpack:
find . -name '*.tar.gz' -type f -exec tar -x $(TARFLAGS) "{}" \;
clean:
rm -vf $(DEBS)
distclean: clean
ls -d $(CLEAN_DIRS) | grep -v $(DEBCNTL) | xargs rm -fvR
# These are the stem rules that set the interdependencies
# between the various output types
$(DEBS): %.deb: %-package
$(TGZS): %.tar.gz: %-package
# Stem targets for generating the various outputs
# These are the commands that generate the output files
# .deb, .tar.gz and -package/ directories
%.deb: %-package
fakeroot dpkg-deb --build $< $@
%.tar.gz: %-package
tar -c $(TARFLAGS) $@ $< $(EXCLUDES)
The major pitfall I had when creating the make file was figuring out the rules for the .tar.gz, .deb and
-package targets. The first two are straightforward to create using the % stem, but when creating the
-package target I ran into a circular dependency, because the pack and unpack rules have targets that, when defined using static pattern rules, are contradictory.
%.tar.gz: %-package
and the opposite
%-package: %.tar.gz
This caused the unpack target to execute both the extract and the create-tarball rules, leading to the error of a Debian file containing only the DEBIAN directory; not much package there. Since the aim was a generic make file, one to be used for all packages, I ended up using the find command to find and extract all tarballs. I figured this was the easiest approach, since using the static pattern rules didn't work as intended.
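With the make file in place, building a single package from its directory is expected to look something like this (using the slickedit example from above):
cd slickedit
make            # unpack the tarball and build slickedit.deb
make clean      # remove the generated .deb again
make distclean  # also remove the extracted -package contents
make pack       # re-create slickedit.tar.gz after changing slickedit-package/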
Labels:
deb. packages,
Make,
Makefiles,
reprepro