Gdbm for Windows

This is a Windows port of the famous GNU dbm (version 1.8.3). GNU dbm is a simple extendible hashing mechanism to store key/value pairs in files.

The official gdbm website can be found at the GNU website, where you can also download the original distribution.

Because the original distribution is only available for Unix-like systems, I decided to port it to Windows. The Windows port contains a prebuilt dll/lib (built with Visual C++ 6.0 on Windows 2000) and some sample executables. So that you can build the dll yourself, the sources and makefiles (for Visual C++ and MinGW) are included in the win32 subdirectory.

For instructions on how to build gdbm for Windows, have a look at the included build instructions.

My primary goal in porting gdbm to Windows was to get access to the gdbm hash functions via Tcl. Therefore I created Tgdbm (Tcl gdbm), which can also be downloaded from this website.




Wheee. Though this stuff is quite old, it is used (or at least linked to) in Windows PC Utilities for Maemo Mapper for the Nokia tablet.


Added Makefile.cygwin (thanks to Alejandro Lopez-Valencia) to build gdbm on Windows within the latest Cygwin environment.

16. Dec. 2003

Version 1.8.3: Adapted to the new gdbm-version 1.8.3

24. Sep. 2002

Version 1.8.0c: Bugfix (thanks to Nakaue Takumi) in gdbm.h (the #endif of extern “C” was on the wrong line).

04. Apr. 2001

Version 1.8.0b: Bugfix (thanks to Oleg Oleinick) in gdbm_reorganize (corrected Windows filename parsing).

01. Apr. 2001

Version 1.8.0: Initial version

WebDAV for TclHttpd

This is a simple WebDAV extension to let TclHttpd work as a WebDAV server.


  • August 04 – WebDAV for TclHttpd is program of the week (POTW). Thanks a lot!
  • July 04 – Released version 0.1a


This is a simple WebDAV extension which can be used in conjunction with the TclHttpd web server to build a WebDAV server.

You have embedded TclHttpd in your application to provide web access? Why not add WebDAV support as well?

Be warned: this is considered a prototype (alpha status).


  • WebDAV support for files in the webserver docroot
  • WebDAV support for vfs-mountable files (such as starkit, zip or tar files)
  • Quite easy to extend
  • Tested with:

So what is that? With the provided Tcl files (see Download), which you put into your custom directory, you can turn your TclHttpd web server into a WebDAV server. You can e.g. expose vfs files to be accessed via WebDAV. Suppose you have a zip file: put it in your docroot and you can browse through the zip file with a WebDAV client (like Internet Explorer or Konqueror).

Maybe you have some starkits; simply connect a Web folder to your WebDAV server and browse through the contents of the starkit (see image below).

Though this implementation is quite useful, you should be aware of several limitations:


  • No support for locking
  • No support for authentication
  • No support for versioning (so it’s kind of WebDA ;-)
  • No threading support (at least: I haven’t tested it)
  • Testing is very incomplete (COPY of collections??)
  • The depth header is mostly ignored
  • Many gaps in the implementation

Keep in mind that this was just a fun project for me and not a serious WebDAV implementation. One more drawback is the mapping of path prefixes to the URL prefix handler. Maybe I will change that later.


WebDAV for TclHttpd needs the following modules:

If you get the latest Tcl distribution, everything should be in place (besides TclHttpd).


To make your TclHttpd WebDAV-aware simply follow these steps:

  • Download
  • Extract the zip-file into the ‘custom’-directory of your TclHttpd-installation
  • Create a directory in the ‘htdocs’-directory that should provide WebDAV-access (e.g. dav)
  • Put some files in that directory
  • Create another directory (let’s say kit) in which you may place starkit, zip or tar files (be sure to use the extensions .kit, .tar or .zip respectively; otherwise the files will not be recognized, see webdav_vfs.tcl)
  • Adapt webdav.conf to your needs (if you follow this example and use the given directory names, everything should work with the provided webdav.conf)

That’s it.

Now you can use your Internet Explorer and create/open a Web folder with this Url:

 http://your-server:your-port/kit/  (or /dav/)

If you have a standard installation of TclHttpd use: http://localhost:8015/dav/

Remember to check “Open as webfolder”.
Depending on the content in that directory you could see something like this:


Screenshot of WebDAV, seen from Windows Explorer

This image shows a mapped Web folder in the usual Windows Explorer. Here I just browsed inside the file structure of patience.kit (an implementation of the patience card game which I had lying around; you can download the starkit from [1]).


To configure WebDAV have a look at webdav.conf. The content is explained in a little more detail in the next chapter.

Extend WebDAV

Maybe you even want to extend WebDAV for other purposes. Simply create a file like webdav_mymodule.tcl inside the TclHttpd custom directory and add the following lines to webdav.conf:

webdav_resource /your-path {
  filename custom/webdav_mymodule.tcl
  namespace my::webdav::module
}

When TclHttpd starts, it reads the custom directory files. Sourcing webdav.tcl results in evaluating webdav.conf; all resources listed there are read in as well.
The function webdav_resource takes 2 parameters. The first one is the absolute path in your htdocs (this is used to install the handler with Url_PrefixInstall). The second parameter contains a list of key/value pairs. filename and namespace must be given to let webdav automatically source the files and call the corresponding functions. More key/value pairs can be entered and used by your own module.

Your module has to support a few functions to get working WebDAV functionality. These are:

  • your_namespace::GET
  • your_namespace::PROPFIND

Furthermore you can implement all functions which are needed by WebDAV according to rfc2518.
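A minimal skeleton of such a module might look like the following sketch. The procedure signatures (argument names and what they receive) are assumptions of mine; check how webdav.tcl actually invokes the handlers before relying on them:

```tcl
# webdav_mymodule.tcl -- hypothetical skeleton for a custom WebDAV module.
# The namespace must match the one declared in webdav.conf.
namespace eval my::webdav::module {

    # Handle GET: deliver the content of the requested resource.
    # The arguments (socket and path) are assumptions, not the real interface.
    proc GET {sock path} {
        # ... look up $path and send its data back over $sock ...
    }

    # Handle PROPFIND: report the properties of the resource
    # (size, modification date, ...) as required by RFC 2518.
    proc PROPFIND {sock path} {
        # ... build the multistatus XML response for $path ...
    }
}
```

Further methods from RFC 2518 (PUT, MKCOL, DELETE, …) can be added to the same namespace as needed.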



You want to give it a try but don’t have a WebDAV-client? Why not use webdav_vfs from tclvfs?

Assume you have put a starkit inside htdocs/kit. Open a Tcl shell and mount the resource (in this example a URL to the webdav_vfs handler):

package require vfs
vfs::webdav::Mount http://localhost:8015/kit/ kit
cd kit
glob *  ;# yields a list of file-names in the kit-directory
# assume you have a starkit there (e.g. patience.kit, see image above)
cd patience.kit
# you're now inside the starkit-file
glob *  ;# returns a list of files/directories inside the starkit

You can do so with zip or tar files as well. Isn’t that crazy? You access zip files via WebDAV. I’m quite astonished about this :-) and I’m having a lot of fun with it.

Be warned: the webdav client implementation needs at least as much work as this webdav server implementation :-)


29. Jul. 2004

Released version 0.1a (damn, IE is such a nit-picky DAV client). Removed the property lastaccessed!

28. Jul. 2004

Released initial version (0.1)



This software is copyright © 2004-2010 by Stefan Vogel.

This software is released under GPL.

Tgdbm library for Tcl (Version 0.5)

Tgdbm is an easy-to-use Tcl wrapper for the GNU dbm library (gdbm).


Tgdbm provides an easy to understand interface to the GNU dbm library (gdbm).
Gdbm uses extendible hashing and stores key/value pairs, where each key must be unique (gdbm can be downloaded at the GNU website; there is also my Windows port of gdbm).
Though gdbm provides compatibility with ndbm and dbm, only the gdbm commands are supported in Tgdbm.

Furthermore you can use Tgdbm for transparently accessing and storing Tcl arrays (persistent arrays). An array is attached to the gdbm file via a handle. With this you can set an array entry, which is stored or updated transparently in the corresponding gdbm file.
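A hedged sketch of what this might look like in practice. The exact syntax for attaching an array is documented in README.txt; the `-array` option shown here is an assumption of mine, not the confirmed interface:

```tcl
package require tgdbm

# Attach the Tcl array "config" to options.gdbm (the -array option
# name is an assumption -- see README.txt for the real syntax).
gdbm_open -wrcreat -array config options.gdbm

# Every write to the array is transparently stored in the gdbm file ...
set config(background) blue

# ... every read is fetched from it, and unset deletes the key.
unset config(background)
```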

Tgdbm is meant to be used in Tcl applications which have to store a small or medium amount of data. In many cases there is not enough data to justify a “real” database, but you still want an efficient way to store it (not just write it to a plain text file).

Because Tgdbm is provided as a loadable Tcl module, it can be easily integrated into any Tcl application.


You can download Tgdbm with the following links:


14. April 2005

Released Version 0.5:

Yes it’s still 0.5 but has some improvements. All those fixes were sent to me by Thomas Maeder (thanks a lot). Have a look at the file CHANGES.txt inside the distribution to see what happened.

9. Jan. 2004

Released Version 0.5:
Persistent arrays were added to Tgdbm. Because the concept of Tcl arrays (which have unique keys) is nearly equivalent to gdbm key/value pairs (which also have unique keys), the two are now combined to allow transparent handling of persistent arrays.
You can simply attach an array name to a gdbm file. Afterwards every operation on the array (read/write/unset) is traced and the keys/values are automatically fetched, stored, updated or deleted in/from the gdbm file.
For further information see README.txt.

Cleanup and restructuring of the C-Code, added sync-command …

1. Feb. 2000

Released Version 0.4

A quick and simple example

Even though the Tgdbm commands should be easy enough (if you know the gdbm library), a few examples should help you start immediately.

package require tgdbm
proc store_array {file data} {
    upvar $data dat
    # create file if it doesn't exist
    set gdbm [gdbm_open -wrcreat $file]
    foreach entry [array names dat] {
        $gdbm -replace store $entry $dat($entry)
    }
    $gdbm close
}
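A hypothetical counterpart for reading the data back. The subcommand names (`firstkey`, `nextkey`, `fetch`) mirror the gdbm C API and are assumptions on my part; check README.txt for the actual Tgdbm subcommands:

```tcl
package require tgdbm

# Sketch: iterate over all key/value pairs of a gdbm file.
# firstkey/nextkey/fetch are assumed names modeled on the C API.
proc dump_file {file} {
    set gdbm [gdbm_open -reader $file]
    for {set key [$gdbm firstkey]} {$key ne ""} {set key [$gdbm nextkey $key]} {
        puts "$key -> [$gdbm fetch $key]"
    }
    $gdbm close
}
```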

You can also try the file tests/demo.tcl which implements a simple gdbm file viewer. This viewer stores its configuration options (like colors or window positions) in option.gdbm (like an INI file).
The gdbm viewer needs the tablelist widget by Csaba Nemethi (which can be obtained from:

More examples.


More documentation

Web Scraping

Part I

As I was searching through the web to find something useful concerning “web scraping”, I was astonished about the lack of information. So I decided to put up something myself. Isn’t there anything useful out there? I know “web scraping” (or “screen scraping” in general) is a disgusting technique and I have to admit: it usually makes me puke.

But, well, there are times, when you have no other chance (or even worse: you have a chance but that one is even more horrible).

After doing several web scraping projects, I will put together some of my experience here. The following examples will be shown in PHP and Tcl (version > 8.4.2 and tdom 0.8). But as far as I know, other languages (Ruby, for example) could easily be used with similar techniques.

But first of all a …


Before starting to scrape something off the web, be sure there is no better way. Often you may find an official API that should be used (e.g through Web Services or a REST-API) or there are other services that deliver the needed information.

And moreover, convince yourself that web scraping is at least not forbidden. Some big sites state in their terms and conditions that scraping is not allowed; you should respect that. Furthermore, be aware that your requests add to the load of the target site. Always keep in mind that you are retrieving information in a way that’s surely not intended by the site’s owners and operators. So be nice and don’t make too many requests.

If you’re taking content from other sites without the permission of the creators you will, depending on the usage of this content, violate copyright law.

Having said that, we start with the simplest method.

Regular expressions

That’s always the first method mentioned when somebody speaks of analyzing texts (and “analyzing text” is in general what you do when you scrape a website). Though this might be feasible for grabbing specialized texts from a page, you’re in for hell if you want more.

So let’s look at a small example where a regular expression is enough. We want to extract the current value of the DAX.
There is certainly some webservice to retrieve this kind of data. But as I wanted to make a really simple example, let’s assume there is no way around scraping.

Have a look at any financial site and you will find some HTML similar to this:

HTML-Code 1

We concentrate our attention on the table with the row “DAX” and the column “Punkte”.
Extracting the DAX value can be done simply with:

// $regexp matches the linked text "DAX" and captures the following
// table cell (see HTML-Code 1)
if (preg_match_all($regexp, $html, $hit) && count($hit[1]) == 1) {
    print 'Found DAX: '.$hit[1][0];
} else {
    print 'Error! Retrieved '.count($hit[1]).' matches!';
}
PHP-Code 1

Or if you prefer to write that in Tcl:

set f [open boerse.html r]; set html [read $f]; close $f
# or retrieve the page directly:
package require http
set token [::http::geturl ""]
set html [::http::data $token]
set regexp ">DAX.*?(.*?)"
# -all -inline returns the complete match plus the braced sub-expression
if {[llength [set l [regexp -all -inline $regexp $html]]] == 2} {
    puts "Found DAX: [lindex $l 1]"
} else {
    puts "Error! Retrieved [llength $l] matches"
}
Tcl-Code 1

To have a better way of testing, I usually store the page locally. With file_get_contents you can simply switch between the locally stored file and the web address (as far as I know there is nothing that easy in Tcl to switch between file and URL). As long as you’re trying to find the correct regular expression for the match, you should definitely work with a locally stored HTML file.
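That said, a small helper proc can emulate this behaviour in Tcl. This is just a sketch (the proc name get_contents is my own invention), mimicking what file_get_contents does in PHP:

```tcl
package require http

# Hypothetical helper: return the content of a local file, or fetch
# it via http if the source looks like a URL.
proc get_contents {src} {
    if {[string match http://* $src]} {
        set token [::http::geturl $src]
        set html [::http::data $token]
        ::http::cleanup $token
        return $html
    }
    set f [open $src r]
    set html [read $f]
    close $f
    return $html
}

# set html [get_contents boerse.html]   ;# local copy while testing
# set html [get_contents $someUrl]      ;# the real page later
```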

Make sure that the pattern matches only once or you might retrieve the wrong part of the page. That’s why the regular expression contains a little bit of the surrounding tags: we assume that there will only be one linked text “DAX” in a table cell, with the next cell containing a number.

Furthermore, in PHP add the modifier /s (treat string as single line) to the regular expression, because the text to match stretches over multiple lines (see “HTML-Code 1”) and I simply wanted to ignore that. (In Tcl, “.” matches newlines by default, so no extra switch is needed.)

Because of unexpected and surely unannounced changes to the page (at least unannounced to you as a “nearly” anonymous scraper), make sure that you check for the right data. If the pattern doesn’t match, there is definitely something wrong and you have to look at the HTML code for changes. And if the pattern matches more than once, that is wrong too. Therefore I always use preg_match_all (or -all in Tcl).

Well, this was easy and in fact I wouldn’t call this “web scraping”. If you want more to scrape than a single number or word from a page, forget about regular expressions.

We need something more powerful. Something which can be used on nested structures. Have you ever tried to match paired tags like “<table>…</table>” with regular expressions? No way! Go directly to jail! Do not pass go!

Part II

A more powerful way than regular expressions? Hardly imaginable? Think again!


DOM is for correctly structured XML-like data only? Oh no, there is more. At least in PHP you can use the usual DOMDocument. And as far as I know even Internet Explorer somehow handles badly formatted HTML, and it uses a DOM representation internally. So there are also other “convert bad-bad-bad HTML to DOM” tools out there.

Let’s start with another simple example. We want to find out how long a search on google takes.

First we have to feed the HTML into the DOMDocument (let’s search for “scraping”). To get the URL, just go to the website, enter “scraping” and copy the resulting URL into the code.

// create DOM from bad HTML
$dom = new DOMDocument();
if ($dom->loadHTML($html)) {
    // go on with parsing
}
PHP-Code 2
package require tdom
package require http
set url ""
set token [::http::geturl $url]
set html [::http::data $token]
# create DOM from bad HTML
if {![catch {dom parse -html $html} dom]} {
    set root [$dom documentElement]
    # go on with parsing
}
Tcl-Code 2

You will get tons of warnings from the method loadHTML. As we know that this is badly formatted HTML, we will silently ignore those.

If we got a DOM object we start parsing the HTML. We do this with XPath. After analyzing the HTML code of the result page you can find this specific text (newlines inserted for clarity):

HTML-Code 2

To find the duration of the search, we simply have to get the div tag with the id resultStats, and below that the nobr tag.

$xpath = new DOMXPath($dom);
// get the div-tag with id=resultStats
$queryTime    = '//div[@id="resultStats"]/nobr';
$nodeTimeList = $xpath->query($queryTime);
if ($nodeTimeList && $nodeTimeList->length == 1) {
    print 'Query took: '.$nodeTimeList->item(0)->nodeValue;
    // further queries ... see below
} else {
    // something went wrong, do some error-management
}
PHP-Code 3

In Tcl this looks like this:

if {![catch {$root selectNodes {//div[@id='resultStats']/nobr}} nodeTimeList]
    && [llength $nodeTimeList] == 1} {
    puts "Query took: [[$nodeTimeList firstChild] nodeValue]"
    # further queries ... see below
} else {
    # something went wrong, do errorhandling
}
Tcl-Code 3

With the XPath expression //div[@id='resultStats']/nobr we get all the nobr tags that are below a div tag with the id attribute resultStats.

And because it is an id, there really should be only one. But you never know: the search might give no results. In that case we wouldn’t get a node-list object, so we check for its existence and that there is exactly one element ($nodeTimeList->length == 1). You should always check your results completely so that they exactly meet your expectations.

If the search doesn’t return results you should think of some error-handling.

You may ask yourself: “Why haven’t we used the method getElementById?” This would return the node directly. But have a close look at this method. As mentioned in the documentation, you have to call validate() first. You don’t expect that HTML rubbish could be validated, do you?

Now let’s print the search results.

Looking through the html-code we find (newlines inserted for clearness):


By now we would be in for complex parsing with regular expressions; with XPath we simply ask for these nodes: //div[@id='results']//h3/a. The script looks like this:

$nodeHitList = $xpath->query("//div[@id='results']//h3/a");
foreach ($nodeHitList as $node) {
    print $node->nodeValue;
}

foreach node [$root selectNodes {//div[@id='results']//h3/a}] {
    puts [$node asText]
}
Could it be shorter and cleaner? I guess not. Maybe we could again add some error checking? I will leave this as an exercise to you. ;-)

Some word about User-agent

The way I retrieve the pages in these examples is surely the most simple. When using file_get_contents, PHP doesn’t send a user-agent string with the request. Retrieving the URL in Tcl with geturl sends the user agent “Tcl http client package ”. In Tcl you can simply configure another user agent with:

::http::config -useragent "lala"

In PHP you have to use a full-blown HTTP reader like HTTP_Request if you want to do fancier things like setting the user agent or retrieving the pages through a proxy.

Setting the user agent might be necessary because the target page may check which browser is used, and “Tcl http client package” is surely not the most common “browser” :-).

But as stated in the warning at the beginning, you should be honest and friendly toward the scraped site, and identifying yourself as a “scraper” is one way to do that.


If I’ve got some time I will add some chapters concerning sessions (e.g. if you’d like to get your bank balance automatically), SSL, and maybe even some warnings about JavaScript.

But for the time being I leave it as is. Unless someone wants to improve this pidgin English (I’m always glad if someone corrects me; please don’t hesitate to mail me all errors).


As said in the beginning, there is not much information around for this subject.

Professional screen-scraping software:


Metakit@Web is a web-based admin interface for metakit files

Metakit@Web is a web administration interface for metakit files and works with TclHttpd. It’s like the well-known phpMyAdmin, but only for metakit database files and only suited for the real Tcl web server.


  • June 04 – Metakit@Web (0.5) is now distributed as a sample-app in TclHttpd 3.5.1
  • April 04 – Jean-Claude Wippler has built a starkit of Metakit@Web – thanks, Jean-Claude.
  • February 04 – Released version 0.6


Metakit@Web is an administration interface written for TclHttpd. It can be used to administer metakit database files. I’m using it as a rapid prototyping tool to develop data models and quickly store and edit data.

The features of Metakit@Web (version 0.6) in detail:

  • easy installation in TclHttpd (just extract the zip-file into your htdocs-directory)
  • easy creating, editing, manipulating Metakit-files (useful for rapid-prototyping)
  • easy browsing through content of Metakit-files
  • if you have stored images inside your metakit, you can easily look at them through the browser interface
  • session-support (with row-locking) (not supported by 0.5)
  • htaccess-support (not supported by 0.5)
  • multi language-support (thanks to Miko le pépé) (not supported by 0.5)

There are now two different versions of Metakit@Web with two different techniques used.

  • version 0.5: consists of only one file and is implemented as a Direct_Url
  • version 0.6: is a htdocs-directory and allows you to have .htaccess files and sessions.

Version 0.5 can still be downloaded but I won’t do any more work on it.

Here is a screenshot of Metakit@Web (0.5/0.6):


Installation is quite simple (as always with TclHttpd):

  • Download
  • Extract the zip-file into your htdocs-directory
  • (For version 0.5 you have to put the Tcl file into the TclHttpd custom directory. For version 3.4.2 you may have to patch your TclHttpd so that custom files are read. Be sure to remove version 0.5 if you want to install 0.6)
  • Create a directory where you want to place your metakit-files (e.g: /metakits or c:/metakits)
  • Adapt the directory setting in .tml (e.g. set the array entry aConfig(databaseDir) to /metakits or c:/metakits). You will get prompted if the directory doesn’t exist and may configure it via the configuration dialog.
    (in version 0.5 adapt the variable in mkweb_05.tcl)
  • Go to http://your.domain/mkatweb (or, for version 0.5, http://your.domain/mkweb)
  • Read the help which is shown in the content-frame.



6. Feb. 2004

Released version 0.6: Now released as zip-distribution.

4. Sep. 2003

Released version 0.5: The last “one-file”-version of Metakit@Web.

1. Sep. 2003

Initial version 0.4.