

Ocamlnet 3 finally released

What's new in Ocamlnet 3 - by Gerd Stolpmann, 2010-09-01

So, finally it is there: Ocamlnet 3.0.0. After almost 3 years of development, many parts of Ocamlnet have been touched and extended while keeping most of the existing APIs. It is not immediately visible what the striking new features are, so a bit of explanation is necessary.

When renovating a building, it is common to do this floor by floor. In this sense, Ocamlnet 3.0.0 focused on the foundation and the first floor. Also, the renovation is not yet finished - many features still need to be added, like supporting SSL for more protocols. This is now easier thanks to some new basic APIs that have been introduced in the first step.


One of the parts that got most attention is Netsys, the library adding the missing links to the operating system (OS). One of the driving forces was the port to Win32. This led to the introduction of generalized versions of the Unix.read and Unix.write calls (defined in Netsys):

val gread : fd_style -> Unix.file_descr -> string -> int -> int -> int
val gwrite : fd_style -> Unix.file_descr -> string -> int -> int -> int
For getting some Win32-specific emulations right, it is sometimes required to call other functions instead of Unix.read and Unix.write, e.g. Netsys_win32.pipe_read and Netsys_win32.pipe_write. In order to avoid that such case distinctions are scattered over the whole library, the idea of defining these generic functions was born. In fd_style the user passes in how to handle the descriptor. Usually the fd_style is determined automatically by another function, get_fd_style (this requires a few system calls, which is why it is factored out). Although targeted mostly at Win32, there are already some benefits for POSIX systems, e.g. the fd_style already encodes whether a descriptor is a socket, and whether it is connected, which is sometimes quite useful information. In the future, this system will be extended:
  • Seekable files are currently not well supported by the asynchronous I/O layer. The reason is that the select and poll system calls cannot predict whether I/O would be blocking or non-blocking (and thus always say non-blocking). This can be improved by using special AIO calls of the OS. Of course, files for which AIO is to be used need to be flagged specially, and a new fd_style could do so.

  • There are also some ideas for labeling SSL sockets with a special fd_style. This would make it a bit easier to support SSL throughout the library. This is a bit more work than just calling Ssl.read and Ssl.write, though, because the SSL protocol allows renegotiations at any time, so a read may also require writes on the socket level, and vice versa.
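The generic calls can be used like this. This is a minimal sketch relying only on the gread/gwrite signatures shown above plus get_fd_style; the helper name copy_once is hypothetical:

```ocaml
(* Copy one chunk from fd_in to fd_out, letting Netsys pick the right
   low-level read/write variant for each descriptor style. *)
let copy_once fd_in fd_out =
  let st_in = Netsys.get_fd_style fd_in in    (* a few syscalls; do once *)
  let st_out = Netsys.get_fd_style fd_out in
  let buf = String.create 4096 in             (* Ocamlnet 3 predates Bytes *)
  let n = Netsys.gread st_in fd_in buf 0 4096 in
  if n > 0 then
    ignore (Netsys.gwrite st_out fd_out buf 0 n);
  n
```

Determining the style once and reusing it for all subsequent calls avoids repeating the system calls that get_fd_style needs.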

Another new idea on the Netsys level is a little object definition called pollset:
class type pollset =
object
  method find : Unix.file_descr -> Netsys_posix.poll_req_events
  method add : Unix.file_descr -> Netsys_posix.poll_req_events -> unit
  method remove : Unix.file_descr -> unit
  method wait : float -> 
                ( Unix.file_descr * 
                  Netsys_posix.poll_req_events * 
                  Netsys_posix.poll_act_events ) list
  method dispose : unit -> unit
  method cancel_wait : bool -> unit
end
A pollset represents a set of file descriptor events one wants to poll. Again, this data structure was originally required for the Win32 port (because Win32 is very different in this respect), but there are also advantages for Unix systems. Nowadays, there are various improved APIs for polling such as Linux epoll or BSD kqueue. The pollset abstraction will make it very easy to support these - the user simply selects one of the advanced implementations of pollset, and thanks to dynamic binding of object methods it is automatically used everywhere. (One of the next versions of Ocamlnet will allow this.)
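As a sketch, waiting for readability with a pollset might look as follows. The constructor standard_pollset and the argument order of poll_req_events (input, output, priority) are assumptions to be checked against the Netsys manual:

```ocaml
(* Wait up to 5 seconds for fd to become readable. *)
let wait_readable fd =
  let pset = Netsys_pollset_generic.standard_pollset () in
  pset # add fd (Netsys_posix.poll_req_events true false false);
  let events = pset # wait 5.0 in   (* timeout in seconds *)
  pset # dispose ();
  events <> []                      (* non-empty list: fd is ready *)
```

Because the user only talks to the pollset class type, swapping in an epoll- or kqueue-based implementation later requires no changes to this code.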

Another word about polling. The Ocaml runtime only provides select. Although not as bad as claimed by some people, it imposes artificial limitations, especially on the number of supported file descriptors. Because of this, Netsys_posix now includes a binding of the poll system call, which does not suffer from this limitation. Of course, poll is now the only polling API used throughout Ocamlnet (and, as noted, even better APIs will be supported in one of the next releases).

Other additions on the OS level for Unix systems:

  • Netsys_posix.spawn is a new way of starting subprograms, with special support for monitoring the subprocesses asynchronously
  • There are now bindings for syslog in Netsys_posix
  • The system calls fsync and fdatasync are supported
  • If the OS provides this call, fadvise can be invoked to control the page cache
  • There is also fallocate to allocate disk space, provided the OS supports it
  • POSIX semaphores are supported, provided the OS implements the complete interface (i.e. named semaphores for synchronization between unrelated processes)
  • There is a coordinator module for signals, Netsys_signal, so that various users of signals do not mutually override their handlers
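For instance, the new fsync binding makes it easy to write a file durably. This is a sketch assuming Netsys_posix.fsync wraps fsync(2) with the obvious signature; the helper write_durably is hypothetical:

```ocaml
let write_durably path data =
  let fd =
    Unix.openfile path [Unix.O_WRONLY; Unix.O_CREAT; Unix.O_TRUNC] 0o644 in
  ignore (Unix.write fd data 0 (String.length data));
  Netsys_posix.fsync fd;   (* force the data to disk before returning *)
  Unix.close fd
```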

For all systems, Netsys implements:

  • Wrappers for multicasting system calls on sockets
  • In Netsys_mem there is now special support for using bigarrays of chars as efficient I/O buffers. Such bigarray-backed buffers are called memory (reminding us of the fact that these buffers are not relocatable like strings, but bound to fixed memory addresses). There are functions for allocating page-aligned or cache-line-aligned memory buffers. Also, there is experimental support for copying Ocaml values into buffers (used by the Camlbox module, see below). Finally, there are also versions of read, write, recv and send operating on memory buffers rather than strings. These versions open the door to zero-copy network I/O (if supported by the OS).
  • For better support of multi-threading there is now a version of the thread API that exists even when the thread library is not linked in; in particular, critical sections are emulated as no-ops in the single-threaded case. It is hoped that more functions can be made thread-safe by this new feature (in Netsys_oothr).
  • The exception registry Netexn is now almost outdated, because the Ocaml standard library recently introduced a similar feature (yes, sometimes feature wishes are honoured :-).
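A sketch of a memory-based read: the names alloc_memory_pages and mem_read follow the Netsys_mem description above, but their exact signatures are assumptions to be verified against the manual:

```ocaml
(* Read into a page-aligned bigarray buffer instead of a string. *)
let read_into_memory fd =
  let m = Netsys_mem.alloc_memory_pages 4096 in            (* page-aligned *)
  let n = Netsys_mem.mem_read fd m 0 (Bigarray.Array1.dim m) in
  (m, n)                                       (* buffer and byte count *)
```

Since the buffer never moves, the OS can transfer data into it directly, without the intermediate copy that string buffers require.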


As Netsys now uses pollsets to manage polling, Equeue had to be rewritten to take advantage of this. In particular, there is now Unixqueue_pollset, which reimplements the old Unixqueue API on top of pollsets. For the user, there is absolutely no difference.

What's more important is the extension of the engine API. Ocamlnet 2 introduced engines as a way of expressing a suspended I/O possibility, but there was only limited support for it in the library. This has now changed - engines are now a first-class member of Ocamlnet. In particular, there are now many more synchronization primitives (e.g. stream_seq_engine for executing an arbitrary number of engines in sequence, or msync_engine for waiting for the completion of multiple engines). This development was mostly driven by another project of mine: Plasma (see other blog articles on this site). Plasma uses engines for all kinds of concurrent execution of I/O code, and while I was developing Plasma, I extended the Ocamlnet engine API step by step.

There is also now a way to call RPC procedures with an engine: Rpc_proxy.ManagedClient.rpc_engine. This function has originally also been developed for the Plasma project.

For simpler I/O needs, I added Uq_io. It contains "engineered" versions of simple I/O functions like input, input_line or flush. Uq_io is not limited to file descriptors, but works also on top of a number of other I/O devices (including virtual ones).

The operators ++ and >> have been introduced as abbreviations for sequential execution, and result mapping of engines, respectively. For example, the synchronous code

let line1 = input_line ch_in in
let line2 = input_line ch_in in
output_string ch_out (line1 ^ line2 ^ "\n")
would look like this in "engineered" code:
Uq_io.input_line_e d_in ++
  (fun line1 ->
    Uq_io.input_line_e d_in ++
      (fun line2 ->
        Uq_io.output_string_e d_out (line1 ^ line2 ^ "\n")))
Not bad, if you compare with the previous solution (hand-coding a scanner for lines, writing the event handler routines, etc., adding up to 100-200 lines of code).


The development in the Netplex area was focused on easing multi-processing. With Netplex it is very easy to run code in several worker processes, e.g. for network servers. What was missing up to now, however, was an easy way to manage the collaboration of the processes.

Netplex worker processes now have a number of ways to talk to each other:

  • It is now possible to store variables in a common place, so that each process can get and set these (Netplex_sharedvar). Of course, this mechanism is typed.
  • There are mutexes and semaphores for synchronization (Netplex_mutex and Netplex_semaphore)
  • Each process can be directly contacted via a private channel, the so-called container socket. This is also an RPC mechanism, but unlike with normal RPC servers, the caller directly addresses a particular process (and not only a service in general, for which the Netplex machinery would automatically select the destination process). There is also a directory so that processes can see which other processes exist.
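As a sketch, publishing a value from one worker so the others can read it might look like this. The function names are recalled from the Netplex_sharedvar interface (string-valued variables) and should be verified against the manual; the variable name my_svc.leader is made up for the example:

```ocaml
(* In the process that owns the value: *)
let publish_leader () =
  ignore (Netplex_sharedvar.create_var "my_svc.leader");
  ignore (Netplex_sharedvar.set_value "my_svc.leader"
            (string_of_int (Unix.getpid ())))

(* In any other worker of the same Netplex system: *)
let current_leader () =
  Netplex_sharedvar.get_value "my_svc.leader"   (* string option *)
```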

The implementation of these mechanisms is not yet optimal, but the APIs are defined and backed by simple but robust modules. It is expected that more sophisticated implementations will become available in the future, e.g. the Netplex_sharedvar code could use a shared memory object if the OS supports that.

Another addition is "levers". This kind of handle exists within the Netplex master process, but can be activated from the child processes. It is a kind of little RPC function for a special purpose: sometimes the process model requires that certain functionality be executed within the scope of the master process. An example would be the start of another child process. By doing that via a lever, this action can also be triggered from any child process.
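A lever might be sketched as follows. This is a loose sketch: the functor Netplex_cenv.Make_lever and the register signature are recalled from the Ocamlnet 3 manual and may differ in detail, and the service name "worker" is made up:

```ocaml
(* A lever taking unit and returning unit, e.g. for starting more
   containers of a service from within any child process. *)
module Start_lever =
  Netplex_cenv.Make_lever
    (struct type s = unit type r = unit end)

(* At controller setup time, register the function that runs in the
   master process; the returned closure can be called from children:

   let start = Start_lever.register ctrl (fun ctrl () -> ...) in
   ... later, in any child process: start () ...
*)
```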

Besides that there are numerous smaller enhancements. Especially the module Netplex_cenv has been extended, e.g. there are now timers that can be attached to the Netplex event queue.


Development in the RPC area went in two directions: first, a more powerful RPC client implementation, and second, performance, performance, performance.

The improved client is called Rpc_proxy. All the experience I made at my Ocaml job went into it - lots of RPC calls in an unreliable environment (if you have hundreds of machines, one box is always down). Clients can now be recycled, they can react better to errors, and even load balancing and fail-over to alternate endpoints are now supported. (See the other blog posting, "The next server, please!".)

Performance improvements were achieved by two means: First, the XDR encoding and decoding was optimized. This work has not come to an end yet, but certain XDR types like arrays of strings are now processed a lot faster. The other strategy was to replace many string buffers by bigarrays of char (see under "memory" above). This makes it possible to get rid of a number of copy operations, especially when large strings are transmitted via RPC. This new string representation is even accessible to user code via a new XDR type, _managed string. This may avoid even more copies.


The API of Shell is mostly the same - only a few suspicious functions have been removed. The implementation, however, has changed a lot.

Shell now uses the new Netsys functions for starting subprocesses. As these functions are written in C, one gets some immediate benefits: Shell is now officially supported for multi-threaded programs, because it is possible to do the signal handling right in C (but still, this is notoriously difficult). Also, there is now no risk anymore that the Ocaml garbage collector wants to clean up at the worst moment, namely between fork and exec.

Another benefit is that Shell works now also under Win32. The C part is completely different, though.


Not much has changed in the Netcgi area, only that the old version Netcgi1 is gone now.


An exciting but still experimental addition are Camlboxes. They are designed as a fast way of sending messages between unrelated processes. Camlboxes use shared memory for communication.

This works as follows: If process 1 wants to send process 2 a message, both have to map the same memory pages into their address space. The message is originally an Ocaml value somewhere in the private memory of process 1. With the help of Camlbox this value is now copied to shared memory so that, and this is the pivotal point, process 2 can directly access the value without an additional decoding step. This greatly reduces the overhead of message sending - actually, only a relatively fast value copy is done, bypassing any kernel-controlled I/O devices.

For passing a short message, this now takes only a few microseconds. Most of that time is spent on synchronization, of course, not on copying. (On the hardware level, the synchronization is mostly done by moving cache lines from one CPU core to the other, so this is some kind of hidden copying. It is worth noting that Camlboxes are way faster on single-core machines than on multi-cores, because this low-level synchronization is not required then.)

Camlboxes have one downside, though. They are not perfectly integrated into the garbage collection machinery, and because of this, one has to follow some programming rules. In particular, there is no way to recognize that a message (or part of it) is no longer referenced, so messages are deleted manually, and there is of course the danger that bad code keeps references to (or into) deleted messages. For fixing this, we would need more help from the Ocaml GC.
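As a sketch, the receiver side might look like this. The Netcamlbox function names are recalled from the Ocamlnet 3 interface and should be checked against the manual; note the manual deletion of the message, as explained above:

```ocaml
(* Create a box with 16 message slots of 512 bytes each; senders attach
   to it by name via Netcamlbox.camlbox_sender "demo_box". *)
let receive_one () =
  let box = Netcamlbox.create_camlbox "demo_box" 16 512 in
  match Netcamlbox.camlbox_wait box with
    | k :: _ ->
        let (msg : string) = Netcamlbox.camlbox_get box k in
        let copy = String.copy msg in     (* copy out before deleting! *)
        Netcamlbox.camlbox_delete box k;  (* messages are deleted manually *)
        Some copy
    | [] ->
        None
```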

Another problem is missing integration with Equeue. Camlboxes are synchronous by design - that's the price for their speed.

Where to get Ocamlnet 3

Look at the project page for the newest version and links to the manual, mailing list, etc.

Gerd Stolpmann works as an O'Caml consultant. He is accepting new customers!