July 21, 2012
This is an abandoned effort, and is online only for historical purposes. No more development or maintenance is planned. I now believe that media distribution over an ad-hoc topology is feasible only if you own (or rent) a true CDN. If distribution is left to viewers, a peer-to-peer approach would be much better; my new contribution to free live Internet TV is named Kitchen TV.
January 18, 2008
After a one-year pause, development restarts... with a new look for this page. More will come soon.
January 8, 2007
OpenCDN 0.7.7 offers the first release of the new Content Provider Kit, which is based on the Origin entity of the OpenCDN project. Contributing content to OpenCDN can be a problem, because of the intricacy of the configuration files and the knowledge required about the encoder to be used. So, we developed a web-based interface to the Origin configuration, basing it on the encoder capabilities of the VideoLAN Project.
The live streaming content can be distributed by an OpenCDN node based on a patched version of the latest available stable release of the Apple Darwin Streaming Server, downloadable here.
May 18, 2006
Progress report updated for the TF-VVC Face-to-Face meeting held in Catania.
April 25, 2006
0.7.6 is a bug-fixed version of the latest release; it now works properly.
March 24, 2006
0.7.4 release implements the PYTE method of LastHop determination:
  • every Node, at registration time, communicates to the RRDM the name and port of an image file, which can be accessed through an embedded HTTP server
  • the page built at the portal includes references to image files located at LastHop nodes
  • when the viewer's browser downloads these images from the HTTP server running inside each node, a Round Trip Time (RTT) delay is evaluated, measuring the proximity between the viewer and every LastHop. This value is stored in association with the viewer's IP address.
  • upon reception of UDP probe requests (see the third paragraph of point 5 in README.routing), the LastHop informs the prober (RRDM or Transit node) about the RTT toward the Viewer IP address, which is communicated in the probe response.
  • after complete reception of the UDP probe responses, the prober decides which node is nearest to the viewer by comparing the reported RTT delays
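The final selection step above can be sketched as follows. This is a minimal illustrative sketch, not the actual OpenCDN code: it assumes each probe response carries a node name and an RTT in milliseconds (or no value when a node has no measurement for that viewer IP).

```python
# Hypothetical sketch of LastHop selection from probe responses.
# All names here are illustrative, not the real OpenCDN identifiers.

def select_nearest_lasthop(probe_responses):
    """probe_responses: list of (node_name, rtt_ms) tuples;
    rtt_ms is None when a node has no measurement for the viewer IP.
    Returns the name of the node with the smallest reported RTT."""
    measured = [(node, rtt) for node, rtt in probe_responses if rtt is not None]
    if not measured:
        return None  # no usable measurements: fall back to default routing
    return min(measured, key=lambda item: item[1])[0]

print(select_nearest_lasthop([("lh-paris", 120.5),
                              ("lh-rome", 35.2),
                              ("lh-berlin", None)]))  # → lh-rome
```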
November 8, 2005
a short report about the last year's work was given at the 6th TERENA TF-VVC meeting, held in Utrecht
September 29, 2005
0.7.3 release can deliver Windows Media encoded content, transported by a Helix Universal Server.
Unfortunately, the disk of the computer hosting the public test page has died, after four years of honoured work. Thus, the twin live streaming of two Italian music TV channels, encoded in MPEG4 and Real (48 and 225 kbps), has stopped. Anyone who can provide a live streaming feed is warmly welcome.
June 8, 2005:
0.7.2 release is out, featuring
  • hosting of different streaming technologies (i.e. Darwin, Helix) on the same machine, 
  • statistics gathering for Helix.
A progress report was given at the TF-VVC face-to-face meeting at the TERENA Networking Conference 2005, held in Poznań.
March 29, 2005
Redistribution of the 7th Annual SURA/VIDE Conference
March 14, 2005
0.7.1 distribution released, which features great speedup for probes and timeouts, together with updates and bug fixes
January 27, 2005
A Major Number 0.7 Release which includes many new features:
  • Inter-Entity Authentication by a shared secret
  • RTSP authentication support
  • Program pre- and post-requisite CGI invocation
  • Timestamps displayed for Registration and Surrogates data
  • Cleaner logs
  • Concurrent Teardowns
Existing nodes are required to upgrade.
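The first feature above, inter-entity authentication by a shared secret, could look roughly like the following sketch. The actual OpenCDN wire format is not documented here; this assumes an HMAC-style scheme, and the secret, message format, and function names are all illustrative.

```python
# Hypothetical sketch of shared-secret authentication between entities
# (e.g. a Node and the RRDM). Names and formats are assumptions.
import hashlib
import hmac

SHARED_SECRET = b"example-secret"  # configured identically on both sides

def sign(message: bytes) -> str:
    """Compute an HMAC tag over the message with the shared secret."""
    return hmac.new(SHARED_SECRET, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the message."""
    return hmac.compare_digest(sign(message), tag)

msg = b"REGISTER node=lh-rome port=8554"
tag = sign(msg)
print(verify(msg, tag))         # True for an untampered message
print(verify(msg + b"x", tag))  # False once the message is altered
```

Both entities share the secret out of band, so neither needs to send it on the wire; only the tag travels with each message.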
January 22, 2005
the PDF presentation (in Italian) has been updated to the one given at LinuxClub
December 21, 2004
support for the Helix Universal Server added
October 14, 2004
OpenCDN now hosted at SourceForge
October 11, 2004
local FirstHops avoid FirstMile bottlenecks
  • LastHop nodes, aimed at serving media to LAN clients, now also act as the preferred FirstHop Transit Relay for Origins which lie in the same LAN. This allows the stream to remain local (for local clients) in the LAN; the presence of an outside Transit relay guarantees a single first-mile traversal (for remote clients)
  • the RRDM routing logic has improved, as it now handles nodes which are both TR and LH for different footprints. Loop avoidance is performed
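The local-FirstHop preference above can be illustrated with a small sketch. This is an assumption about the selection rule (match the Origin's address against each node's LAN prefix), not the real RRDM logic; all names are hypothetical.

```python
# Illustrative sketch of the "local FirstHop" preference: when an Origin
# sits inside a LastHop node's LAN, that node becomes the FirstHop so the
# stream stays local. Subnet matching by CIDR prefix is an assumption.
import ipaddress

def pick_first_hop(origin_ip, lan_nodes, default_transit):
    """lan_nodes: mapping of node name -> LAN network in CIDR notation.
    Returns the local node if the Origin lies in its LAN, else the
    default outside Transit relay."""
    addr = ipaddress.ip_address(origin_ip)
    for name, cidr in lan_nodes.items():
        if addr in ipaddress.ip_network(cidr):
            return name          # stream stays inside the LAN
    return default_transit       # remote Origin: single first-mile traversal

print(pick_first_hop("192.168.1.42",
                     {"lh-lan": "192.168.1.0/24", "lh-other": "10.0.0.0/8"},
                     "tr-default"))  # → lh-lan
```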
August 12, 2004
two new key features: Origin registration, and FirstHop selection by the Origin
  • An OpenCDN daemon runs at the content provider's premises, registering metadata with the RRDM. The announcement portal dynamically builds the OpenCDN request form after querying the RRDM for registered Origin metadata
  • The Origin daemon also runs an XML-RPC server, invoked by the RRDM during SetUp in order to locate the FirstHop nearest to the Origin. The Origin performs a parallel UDP probe of the FirstHop candidates, and the result is used by the RRDM for rooting the distribution tree
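The parallel UDP probe mentioned above might be sketched like this: fire one probe at every candidate at once, and treat the first candidate to answer as the nearest. The port numbers, payload, and function names are illustrative, not the real OpenCDN protocol.

```python
# Hypothetical sketch of the Origin's parallel UDP probe of FirstHop
# candidates. Payload and timeout values are assumptions.
import socket

def probe_first_hops(candidates, payload=b"OCDN-PROBE", timeout=1.0):
    """candidates: list of (host, port) pairs. Returns the address of the
    fastest responder, or None if nobody answers within the timeout."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        for addr in candidates:
            sock.sendto(payload, addr)    # fire all probes at once
        _, addr = sock.recvfrom(1024)     # first reply wins: lowest RTT
        return addr
    except socket.timeout:
        return None
    finally:
        sock.close()
```

Because the probes are sent in parallel rather than one after another, the total wait is bounded by a single timeout instead of one timeout per candidate.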
July 22, 2004
The TERENA website officially advertises the OpenCDN project. Thanks, guys!!
June 28, 2004
  • OpenCDN can now cope with network partitions or node failures, and recover from these events.
  • Previously, when a content source stopped transmitting, the RRDM did not know about it and kept directing new requests toward disconnected surrogates.
  • Now, while surrogates are probed via UDP, the RRDM asks them whether the relay still works and, if not, tries to re-build the distribution path.
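The recovery loop described above can be summarized in a short sketch. The probe and rebuild operations are stand-ins for the real UDP exchange and path computation; every name here is illustrative.

```python
# Illustrative sketch of the failure-recovery logic: during the periodic
# probe round, each surrogate reports whether its relay still works, and
# dead relays trigger a rebuild of the distribution path.

def check_surrogates(surrogates, probe, rebuild_path):
    """surrogates: list of node names.
    probe(node) -> True if the node's relay is still working.
    rebuild_path(node) re-routes distribution around a dead relay.
    Returns the list of dead nodes found this round."""
    dead = [node for node in surrogates if not probe(node)]
    for node in dead:
        rebuild_path(node)   # route new viewers around the failure
    return dead
```

In this shape the RRDM never blocks new requests on a dead surrogate for longer than one probe interval.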
April 16, 2004
  • Darwin Streaming Server 5 is now used. Previous DSS releases will no longer be supported
  • RTSP DESCRIBE probes are used before requesting media from other relays
  • improved log readability
  • node re-registration loop added, coping with RRDM reboots
April 6, 2004
a short but detailed Overview is now online
March 25, 2004
A pretty logo for this project! Serious advertisements will start now...
February 11, 2004
  • a common configuration file is used for storing information needed in common by Nodes, RRDM, and CGI
  • more than a single Node can execute on the same machine, and the same Relay can serve more than a single CDN; i.e. two instances of the "node code" can use the same Darwin, registering themselves at two different RRDMs at the same time
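A hypothetical sketch of what such a shared configuration file might look like; the section and key names here are illustrative, not the actual OpenCDN syntax:

```ini
; Settings read in common by Nodes, the RRDM, and the CGI scripts
[common]
log_dir = /var/log/opencdn

; Two node instances on the same machine, sharing one Darwin server
; but registering with two different RRDMs at the same time
[node_a]
rrdm_host = rrdm-one.example.org
darwin_port = 554

[node_b]
rrdm_host = rrdm-two.example.org
darwin_port = 554
```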
January 19, 2004
this page is online