Tuesday, May 20, 2008

Declaring UI independence

Earlier this year, Stefan announced the availability of the YaST user interface engine separate from YaST itself.
The user interface engine, packaged in yast2-libyui (source code here), provides an abstraction over graphical (Qt, Gtk) and text-based (ncurses) user interfaces. It can now be used independently of YaST2 in generic (C++) applications.
Now what can you do with it? First of all, you can use C++ to code YaST-like dialogs which display either in graphical mode (Qt or Gtk style) or text mode. This independence from the output medium is a key feature of YaST.
Now that it is separated from YaST, the UI engine can be used in stand-alone programs. A trivial example is a simple window with a text label and an 'Ok' button: HelloWorld.cc. Compile with
g++ -I/usr/include/YaST2/yui -lyui HelloWorld.cc -o HelloWorld
and run it via
./HelloWorld
Depending on the DISPLAY environment variable, the UI engine automatically determines and loads the right plugin to render the dialog.
A simple unset DISPLAY will give you the ncurses look.
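The effect can be pictured with a tiny Ruby sketch (illustrative only; this is not the actual libyui plugin loader, and the plugin names are just stand-ins):

```ruby
# Illustrative sketch of the plugin choice the UI engine makes:
# with a usable X display, a graphical plugin (Qt or Gtk) is loaded;
# without one, the engine falls back to the ncurses text mode plugin.
def ui_plugin(env)
  env["DISPLAY"].to_s.empty? ? "ncurses" : "qt"
end

p ui_plugin("DISPLAY" => ":0")   # => "qt"
p ui_plugin({})                  # => "ncurses"
```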

Enter SWIG

Coding dialogs in C++ sacrifices the highly useful edit-and-run development cycle that YaST's YCP language offers.
With the help of SWIG, a generator for language bindings, you can now use your favorite programming language for coding dialogs. The initial release of the bindings supports Ruby (libyui-ruby), Python (libyui-python) and Perl (perl-libyui).
Swig can directly translate the C++ classes into, e.g., Ruby classes, making conversion of the above C++ code to Ruby straightforward: hello_world.rb. Translation to object-oriented Python gives you hello_world.py. Even Perl, although not object-oriented, gives reasonable code, though the internals of the Swig-generated bindings are not for the faint-hearted: hello_world.pl. yast2-libyui comes with a couple more examples.
SelectionBox1.cc shows how to fill a selection list, use buttons and update labels.
(SelectionBox1)
Here's the Ruby version: selection_box1.rb. Enjoy!

Saturday, February 02, 2008

Open Source Meets Business - Day 3 (final)

(continued from here) The last day of Open Source Meets Business had presentations in the morning and put a spotlight on Microsoft in the afternoon.

Systems monitoring with open source

Again, a talk about Nagios. And again, Nagios was chosen for its cost effectiveness. It seems like most commercial monitoring tools charge per monitored device - customers really dislike this. Oberschwaben Klinik uses Nagios to monitor interfaces, infrastructure, services, applications and devices: 200 hosts and 500 services distributed across six locations. Total deployment time for Nagios was a single month:
  • 3 days initial setup
  • 1 week learning the tool
  • 2 days adaptation to the infrastructure
  • 3 weeks beta phase (getting the alarm thresholds right)
  • 3 days fine-tuning
And they only needed about one week of external consulting. Besides Nagios, they use Cacti for QoS monitoring.

Software for knowledge workers

Peter Pfläging, working for the City of Vienna, pointed out that job roles have changed over the years from topic specialists towards knowledge workers, whom he characterizes as
  • juggling many tasks
  • having no applicable standards for approaching problems
  • being forced into lifelong learning
  • doing lots of brainstorming to find solutions
  • making decisions
and having to explain it all to upper management. They all face the problem of organizing information, prioritizing tasks and documenting things (for themselves and others). Peter presented (his choice of) open source tools supporting this style of work on multiple operating systems.
  1. Mind maps: OPML (an XML data format), Freemind, DeepaMetha, WikkaWiki, Pimki
  2. Task prioritization: GtD, ThinkingRock, d3/dcubed.ca, gtd-php (blogger's note: Tracks)
  3. Wikis: a personal wiki, MoinMoin, TiddlyWiki
  4. Blogging: a weblog with private entries to document your knowledge; WordPress, Typo, Blojsom
  5. Desktop search: for Windows there are Google Desktop and Copernic; on Linux, Beagle doesn't have a real competitor.
You should also prevent others from changing your documents by using signed PDFs. Creating PDFs is a breeze on Linux, but on Windows one needs PDFCreator. Then you can use Peter's PortableSigner for signing.

LiMux

Here one learned about the current state of LiMux. The City of Munich chose to develop its own 'base client' running on Debian Linux. One thousand workstations have been migrated so far (the first 100 in 2006, 900 more last year) and about 5000 PCs already use OpenOffice. The base client is TÜV-IT certified for usability (Gebrauchstauglicher Basisclient) and their Linux Lernwelt learning tool won the European E-Learning award last year. The base client core consists of (Debian) Linux, OpenOffice, Firefox and Gimp. Specific applications run browser-based; legacy (Windows-based) applications use Wine, virtualization or a terminal server. The clients are managed by GOsa, using FAI for deployment. GOsa itself is a set of PHP5 scripts for deployment and configuration management (users, groups, mail, DNS, DHCP, ...) using LDAP as a central CMDB. FAI currently handles update deployment, inventory, license management, log analysis (audit) and hardware monitoring. The upcoming version 2.6 of GOsa will support scheduled and load-balanced mass deployment, replacing FAI.

Nagios at Stadtwerke Amberg

The city of Amberg suffers from an understaffed IT department and outsourced its monitoring. This works fine with Nagios, and the external consulting company is so great, blah, blah, blah ...

How open is Microsoft after the EU ruling?

This was mostly about the Samba/Microsoft agreement as detailed on Groklaw. Conclusion: with the current business model (public company, maximizing shareholder value), Microsoft will not open up too much.

Open source and Microsoft

Sam Ramji, director of Open Source and Linux Strategy at Microsoft, tried to put his employer in a good light by pointing out that 50% of all open source deployments are on Windows. Half of all SourceForge projects run on Windows; 3000 are for Windows only. He continued to outline Microsoft's open source strategy with Windows at its core, surrounded by OSS applications. (see also this post) A Microsoft-based infrastructure (Active Directory, Systems Center, SQL Server) will grow Windows with the help of OSS applications. Microsoft will develop free software to support the interface layer between the free and the proprietary world. This is done in the Open Source Software Labs (OSSL), currently established in Redmond and Cambridge, Mass. These labs focus on strategy, technical research and development of document formats, network protocols, security & identity, systems management, virtualization and application platforms. The long term goal is a "respectful relationship to produce insights and technology to compete, interoperate and collaborate". Best quote (Ramji on Ballmer): "It's time to reset people's perception of what he means when he talks."

Panel Discussion: Microsoft and open source

Dr Oliver Diederich (Heise Verlag) assembled Roger Levy (Novell), Jim Zemlin (Linux Foundation), Paul Cormier (RedHat), Sam Ramji (Microsoft) and Dr Johannes Helbig (Deutsche Post) to discuss the relationship of Microsoft and the open source movement. The complete discussion was recorded and is available here.

Friday, February 01, 2008

Novell Brainshare registration fun

Novell Brainshare will be held March 16-21, 2008 at the Salt Palace Convention Center in Salt Lake City, UT. Registration is open now and you can also apply for Brainshare Connect to find other conference registrants with similar interests. The application form asks for your area of expertise, some work information, areas of interest and to choose your hobbies from a predefined list. Among this list are competitive eating and dumpster diving. I just wonder if these groups will do a BOF at Brainshare ...

Thursday, January 31, 2008

Open Source Meets Business - Day 2

(continued from here)

Workflow management with BPEL

BPEL, the Business Process Execution Language, can be used as an interface between management (defining/modeling process requirements) and IT (implementing services). Its object-oriented approach makes it possible to divide and conquer large tasks along business, architecture and processes. BPEL makes it possible to apply consistent, repeatable processes which can be measured and monitored. The basic concept is SOA (service oriented architecture); BPEL adds a recursive aggregation model for web services and workflow management. Together this allows for service orchestration and 'programming in the large'. BPEL is standardized by OASIS as WS-BPEL. A graphical workflow designer and debugger is available through the Eclipse project, and open source implementations of both the BPEL4WS 1.1 specification and the WS-BPEL 2.0 standard can be downloaded at http://activebpel.org
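BPEL itself is written in XML, but the orchestration idea is easy to sketch in plain Ruby. In this toy example (all service names invented for illustration) three 'services' are composed into a sequence, and the resulting process is again callable like a service, mirroring the recursive aggregation model:

```ruby
# Toy orchestration sketch: each 'service' is a stand-in lambda
# taking and returning an order hash (in BPEL these would be
# web service invocations described in XML).
check_stock   = ->(order) { order.merge(in_stock: true) }
bill_customer = ->(order) { order.merge(billed: order[:qty] * 10) }
ship_goods    = ->(order) { order.merge(shipped: true) }

# A BPEL <sequence> corresponds to composing the invocations.
# The composed process is itself invocable, so orchestrations can
# aggregate other orchestrations recursively.
process_order = ->(order) {
  [check_stock, bill_customer, ship_goods].reduce(order) { |o, svc| svc.call(o) }
}

p process_order.call(qty: 3)[:billed]   # => 30
```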

OPSI - open pc server integration

OPSI is an open source desktop management system available at http://opsi.org While it is based on a Linux server, its primary targets are Windows workstations. Written in Python, it provides inventory, deployment and patch & update management through a Java UI. The presenter gave a short demo and highlighted the nice interface for composing database queries. Windows admins wanting to deploy OPSI shouldn't be afraid of the command line, though.

Complete scalability with integrated virtualization

The title gave the impression of getting some facts, and the presenter's title of 'Solution Architect' made me actually believe this. Boy, was I wrong. I couldn't stand half an hour of marketing fluff and 'RedHat can do it all' without any proof. Had to leave early...

VirtualBox

This one was nice. VirtualBox is (yet another) open source virtualization solution. Unlike Xen, it provides full virtualization and is able to run unmodified guests even without hardware support (Intel VT, AMD-V). So it plays in the league of VirtualPC, VMware or Parallels. VirtualBox runs on Windows, Linux, MacOS and Solaris and supports all major guest operating systems (Windows XP, Windows Vista, Linux, Solaris, OS/2). I downloaded a copy and installed it on my OpenSUSE 10.3 laptop during the presentation and was impressed with its nice and intuitive configuration and management interface. Highly recommended. VirtualBox is mostly used for Windows virtualization and supports Microsoft's RDP (remote desktop) protocol, implemented directly in the virtual graphics card. The VirtualBox server can then act as a terminal server, delivering graphical content to a (less powerful) PC. In this configuration one can even use the USB ports of the dumb terminal. Impressive. Other features are snapshots of running guests and shared folders between guests.

Collaborative Software Development

This talk, given by a consultant from McKinsey & Company, tried to put a spotlight on the influence open source has on traditional economies. It started by showing how working in the open empowers individuals and communities through distributed co-creation, pro-sumption, the 'firm of one / firm of one billion', interactions and collaboration. It very much changes how people work and centers on knowledge workers (see also 'Software for knowledge workers' on day 3). A couple of prominent develop-in-the-open projects were named:
  • Linux
  • Wikipedia
  • we>me textbook (collaborative creation of learning material, see here for a broader scope)
  • oscarproject (open car engineering)
  • Loncin motorcycles, China (using an open development and manufacturing process, they achieve huge cost savings)
  • prosthetics project (prosthetics cad design)
This new development style has a huge economic influence on the gross national product of countries (up to 15% is expected in the future). The presenter pointed out several times that working on open (source) projects is substantially different from traditional work. Only very few companies have realized this yet. All industries will be affected: first those with a large IT share, like banks and insurance companies; next, (car) manufacturers and similar companies with a big portion of high technology in the value chain. The media (broadcast) industry was also named. Big changes ahead.

What's next for Open Source

Given by Kim Polese, CEO of SpikeSource, a company selling 'shrink-wrapped' open source solutions, this presentation covered similar ground to the previous one. Key messages: open source is disruptive, markets are evolving dramatically, and the effects on the global economy are huge. She continued to talk about her company (yes, one can make money with open source) and the usual marketing fluff. Coming trends will be
  • mass market devices (Google Android)
  • proprietary and open source convergence (Novell's 'mixed source' strategy came to mind)
  • virtual appliances
  • online marketplaces (Amazon is Linux-based)
  • consolidation plus proliferation

Open Source Barometer

Alfresco does open source enterprise content management based on a good deal of market analysis. The talk was a preview of the annual Open Source Barometer, focusing on the European and German open source markets. Lots of graphs, trends and colorful views of the survey results. Looking at Alfresco deployments, Windows and Linux are on par as evaluation platforms, but Linux wins clearly when it comes to actual deployments. Looking more closely at Linux (for hosting Alfresco), SUSE Linux wins over RedHat by a factor of five. Globally, however (looking at Linux in general), RedHat has four times more systems out there. Overall, Linux's rise is undamped, but the Novell/Microsoft patent agreement resulted in a clear kink in the curve for SUSE.

Linux System Management at Rewe

Yeah, great title and bad content (pure RHN marketing blurb). Go here if you really want more.

Nagios at Bundesstelle für Informationstechnik

Driven by ITIL, this German government agency needed
  • consistent monitoring, platform independent
  • fast deployment, extensibility
  • transparency
  • acceptance by people
  • Integration into HP Service Desk + HP Network Node Manager
  • Support for UC4
The main reasons for Nagios were cost effectiveness (cost for one server: 1500 EUR with a commercial suite, 25 EUR with Nagios), the API, extensibility, scalability, integration into the existing environment and ongoing development.

VMware releases Perl WS-Management library

VMware just announced the availability of a WS-Management (client side) library written in Perl: "Starting with VMware's VIPerl Toolkit v1.5, an experimental version of Perl WS-Management library is included for infrastructure management with Web Services. The library currently supports 7 out of the 11 generic operations described in the WS-Management - CIM Binding. The library is available for download at http://www.vmware.com/support/developer/viperltoolkit" Nice! Maybe this can act as a guideline for the openwsman language bindings. Let's see.

Wednesday, January 30, 2008

Open Source Meets Business - Day1

Heise Verlag held its annual Open Source meets Business conference in Nuremberg last week.

The conference was quite well attended, reportedly over 700 participants with approximately 699 from Germany, all in business suits ... RedHat was present with a booth (showing OLPC, http://laptop.org). Novell was nowhere to be seen - strange.

Most presentations had about 80% marketing content, 10% about the business model and 10% actual information. RedHat's virtualization talk was especially content-free, a marketing talk given by a 'solution architect'. I had to leave early.

Presentations were limited to 30 minutes (incl. Q&A), with 6 to 7 tracks in parallel. I tried to choose the ones with systems management or software architecture relevance.

Presentations - Day1

REST

Ralf Wirdemann gave a good and easy to follow introduction to the REST architecture style. Nothing new, but it's good to see such topics presented to (reportedly) CIO level management.
Too bad he had little 'real world' experience with either REST or Rails, as his (non-)answers to questions showed.

SugarCRM: Is open source a viable business model?

Short answer: yes. Look at recent acquisitions: RedHat/JBoss 420M, Citrix/XenSource 500M, Sun/MySQL 1B. Nothing about SugarCRM as a product, but plenty about the (open) development and business model (service & support).

Network Monitoring with Nagios

This was the first of four (!) talks about Nagios.
Monitoring is a hot topic for most IT admins. The presenter spent most of the time fighting the incompatibilities between PowerPoint on MacOS, PowerPoint on Windows and OpenOffice (which doesn't show speaker notes ;-))
Filtering out the company marketing blurb, one learned that Nagios allows for cross-platform device and service monitoring. It uses its own client agent (available for Linux, Unix and Windows) but can also process SNMP management information from network devices. The client agent has a pluggable API for gathering information, and there is a whole website dedicated to plugins.
The Nagios server, running on Linux only, provides the management infrastructure including a sophisticated alarm and notification system.
Customers seem to be quite happy with Nagios and miss only a reporting function, which is planned for the future.
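The plugin API boils down to a simple contract: a plugin prints one status line and signals its result through the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). A minimal sketch in Ruby, where the load check and its thresholds are invented for illustration:

```ruby
# Minimal Nagios-style check: map a measured value against
# warning/critical thresholds and return status text plus exit code.
def check_load(load, warn = 4.0, crit = 8.0)
  if    load >= crit then ["CRITICAL", 2]
  elsif load >= warn then ["WARNING", 1]
  else                    ["OK", 0]
  end
end

status, code = check_load(2.5)
# One status line, optionally with performance data after the '|':
puts "LOAD #{status} - load average is 2.5 | load=2.5"
# A real plugin would now terminate with:  exit code
```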

WS-Management

I presented my favorite topic, Web Services for Management: a remote management protocol providing true interoperable management between Linux and Windows.
Slides are available in German (I said it was a German conference, didn't I?)

Wednesday, December 19, 2007

MDC presentations available

Anas asked me to make my Management Developers Conference presentations available, so here they are.

Web Service Management On Rails

The first one, WS-Management On Rails, covers the beauty of accessing WS-Management and WS-CIM functionality through Ruby. The code follows the DMTF Technologies Diagram and consists of
  • rcim for the CIM infrastructure layer: implements the CIM metamodel of classes, properties and qualifiers.
  • mofgen to generate WS-CIM bindings: an extension to the Cimple MOF parser which generates Openwsman client bindings for CIM classes from the class description contained in a MOF file.
  • rwscim for the CIM Schema class hierarchy: wraps the bindings generated by mofgen, makes them available as a single Ruby module and ensures the correct class hierarchy.
And here is a git repository containing a Rails application showing all this in action.
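To give an idea of the metamodel the rcim layer implements, here is a much-simplified Ruby sketch. The class layout and the sample class names are invented for illustration, not rcim's actual API:

```ruby
# Simplified CIM metamodel: a class has a name, a superclass and
# properties; properties carry qualifiers (e.g. Key).
CimQualifier = Struct.new(:name, :value)
CimProperty  = Struct.new(:name, :type, :qualifiers)

class CimClass
  attr_reader :name, :superclass, :properties
  def initialize(name, superclass = nil, properties = [])
    @name, @superclass, @properties = name, superclass, properties
  end
end

# A toy model of a device class keyed by its DeviceID property
key = CimQualifier.new("Key", true)
dev = CimClass.new("CIM_LogicalDevice", "CIM_LogicalElement",
                   [CimProperty.new("DeviceID", "string", [key])])
puts dev.properties.first.name   # => DeviceID
```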

Web Service Management Application Enablement

Web Service Management Application Enablement is about using WS-Management as a transport layer for remote application access. Instead of implementing a separate daemon, protocol and data model, riding the WS-Management horse gives all of this almost for free. And it's more secure. The dynamic plugin model provided on the Openwsman server side makes this particularly easy. The presentation shows how to plan and implement such a plugin and gives two examples: openwsman-yast for a simple, RPC-type approach, and openwsman-hal, which follows the WS-Management resource model.

Tuesday, December 18, 2007

Report from Management Developers Conference

About Management Developers Conference


Management Developers Conference (ManDevCon, MDC) is the annual conference of the Distributed Management Task Force (DMTF).
The DMTF is the leading industry organization for interoperable management standards and initiatives, mostly known for its Common Information Model (CIM) and Web Services for Management (WS-Management) standards.
The full conference schedule can be viewed here.


I already had the opportunity to attend this conference last year. This year, I was accepted as a speaker with two presentations about WS-Management.


Conference overview


The conference has three blocks, one for learning ('university day'), one for demo and interop ('interop lab') and one for presentations.

It was interesting to see how the conference topics changed year over year. Last year, protocols and APIs were still under discussion. In 2006, the WS-Management and WSDM (OASIS Web Services Distributed Management) protocols were still competing. This year, working implementations of various standards dominated.
From a protocol perspective, WS-Management is the clear winner, with virtually every systems vendor showing implementations. Microsoft's adoption of WS-Management for all remote management on Windows (WS-Management comes built into Vista and is available as an add-on to Server 2003 and XP) was probably the driving force here. Openwsman, an open source implementation of WS-Management provided by Intel, is also being picked up by lots of embedded vendors.

The interop lab revolved around implementations for CDM, DASH and SMASH.

CDM, the Common Diagnostic Model, is a CIM extension for diagnostic instrumentation. Its primary use is vendor-agnostic remote health evaluation of hardware. Hewlett-Packard uses this extensively for their systems and requires each of their component suppliers to make test routines available through CDM.
DASH (Desktop and mobile Architecture for System Hardware) and SMASH (Systems Management Architecture for Server Hardware) target management and monitoring of hardware components based on the WS-Management protocol.


Attended presentations


  • Opentestman

  • Opentestman is a validation test suite for WS-Management, WS-CIM and DASH 1.0. It's a (wild) mixture of bash scripts and Java-based utility tools. Tests are described in XML-based 'profile definition documents' (PDD), making the tests data-driven. It currently covers all mandatory and recommended features of the WS-Management and WS-CIM standards. More than 160 test cases exist for all 14 DASH 1.0 profiles. [DASH 1.1 was released early December]
    [Hallway discussions showed that the current implementation of Opentestman is in urgent need of refactoring. So don't look too closely at the code or it might hurt your eyes.]

  • ITSM and CIM

  • ITSM, Information Technology Service Management, can be described as managing the systems management. The presentation gave an overview on existing technologies and asked for participation to model this topic in CIM. Currently, several (policy/modeling) standards exist for this topic, e.g. Cobit (Control Objectives for Information and Related Technology; mostly US, covering business and process mgmt), ITIL (Information Technology Infrastructure Library; mostly Europe, covering service and process mgmt) and CIM (resource mgmt). IT process management has seen a big push recently. Lots of tools and companies appeared in the last couple of years offering services.
    With SML, a service modeling language exists. Other areas like availability management, performance/capacity management or event/incident/problem management do not have any established standard.

  • Using the CIM Statistical Model to Monitor Data

  • Brad Nicholes from Novell showed recent work to integrate existing open source solutions (using non-standard models and protocols) with CIM.
    Ganglia, a "scalable distributed monitoring system for high-performance computing systems such as clusters and Grids", uses rrdtool (the round robin database tool) to view statistical data at different granularities.
    One feature of Ganglia is to provide trending information (as opposed to simple alerting) to support capacity planning.
    Ganglia consists of a statistics gathering agent (gmond) running on every client. These agents are grouped in clusters, sharing all information within the cluster to ensure failover capabilities. The statistics aggregation agents (gmetad) run on specific management servers, reporting to an Apache web frontend.
    Brad has defined a CIM model and implemented CIM providers to access the data. The providers basically hand through rrdtool access, thereby drastically reducing the amount of data transported over CIM.

  • CIM Policy Language

  • This was a report from the DMTF policy working group defining CIM-SPL.
    SPL, the simplified policy language, defines more than 100 operators to express relations (examples given: os 'runsOn' host, os 'hasA' firewall) and actions (Update of CIM properties, execution of CIM methods).
    A CLI tool and an Eclipse plugin exist for developing and testing policies. The Apache Imperius project is about to release a sample implementation; similar plans exist for the Pegasus CIMOM.

  • Nagios through CIM

  • This was another example of bringing open source, but non-standard implementations and CIM together.
    Nagios is a very popular monitoring and alerting framework. It comes with a rich set of data gathering plugins, available on nagiosexchange.org.
    Intel has developed an adapter layer to expose Nagios data through CIM. One can also mix a traditional CIM provider with a Nagios plugin, filling only particular properties from the plugin.
    The source code is not available publicly (yet...).

  • Cimple and Brevity

  • Cimple and Brevity are code generator tools making it easier to develop CIM providers and tools. Cimple is a CIM provider generator. It takes a CIM class description (MOF file) as input and generates stubs for a CMPI provider. This way, a developer does not have to fight with the provider API but can concentrate on the instrumentation part. [The amount of code generated is still huge. For SLE11, Python providers are the better choice for most cases.]
    Brevity tries to ease writing client tools. For people developing in C or C++, Brevity is worth a look.
    [For modern scripting languages, better bindings exist. E.g powerCIM for Python and rwscim for Ruby.]

  • Management Frameworks

  • This talk was meant as a call for help to collaborate on a client framework standard. There are sufficient standards and implementations for instrumenting managed devices, but on the management application side, everyone reinvents the wheel.
    On the side of traditional (closed source) vendors, mergers drive this need; without a common framework they end up with lots of different APIs.
    The proposed 'integrated framework and repository for end-to-end device view' consists of an 'agent tier' (instrumentation), a 'service tier' (see below) and an 'application tier' (API for management applications).
    Services can be divided into infrastructure (discovery, collectors (caching), notifications) and core services (data model, topology, policy, scheduling, security, framework service management, domain specific services).
    This is ongoing work sponsored by Sun Microsystems looking for further participation.

  • openwsman

  • Openwsman is an open source implementation of the WS-Management and WS-CIM protocol standards. It's currently at version 1.5.1, with 1.6.0 scheduled for the end of the year and 2.0 for the end of March '08.
    It consists of a generic library, a client library, and a server library and daemon. The daemon can be used in parallel to existing CIMOM implementations, translating between WS-CIM and CIM/XML. The mod_wsman plugin for Apache provides co-existence of WS-Management and the Apache web server on the same port.
    The main features for next year's 2.0 release are
    • full compliance to the specification (The current WS-Management specification is still not final)

    • WS-Eventing (asynchronous indications, for alerting etc.)

    • A binary interface to sfcb (to connect to cim providers without a cimom)

    • better support for embedded devices

    • Filtering (CQL, the CIM query language; WQL, the WMI query language; XPath, the XML query language)




Wednesday, December 05, 2007

Mapping the IT Universe

The annual Management Developers Conference organized by the DMTF started yesterday with the University Day.

DMTF (Distributed Management Task Force) is an industry organization leading the development, adoption and promotion of interoperable management standards and initiatives. Its mission is no less than Mapping the IT Universe by standardizing an object-oriented model (CIM) and related protocols (WBEM).

The conference was opened by a reception celebrating 15 years of DMTF and 10 years of CIM. Winston Bumpus gave a short overview on the history of the DMTF.

The DMTF was founded in 1992 as the Desktop Management Task Force, focussing on standards for managing desktop PCs. Two years later, the Desktop Management Interface (DMI) was published and quickly adopted. After releasing DMI 2.0 in August 1996, their mission was accomplished and the board considered closing the DMTF.

At that point, Patrick Thompson from Microsoft proposed to extend the management standardization beyond desktops and to cover the complete IT landscape. The original proposal already contained the key aspects and architectural components which are still valid today:

  • HMMS (Hypermedia Management Schema) — CIM today

  • HMOM (Hypermedia Object Manager) — CIMOM today

  • HMMP (Hypermedia Management Protocol) — CIM/XML over HTTP today

Initially a gang of five, namely BMC, Compaq, Intel, Microsoft and Sun accepted the proposal and continued funding the DMTF. In a tour de force with biweekly meetings over a period of 6 months the DMTF was able to present the Common Information Model 1.0 (CIM) in April 1997. It only covered the object-oriented modelling without any transportation protocol. This was added another year later (August 1998) with the Web Based Enterprise Management (WBEM) standard.

In 1999, the DMTF was renamed to Distributed Management Task Force, keeping the acronym (and all the advertising materials).


Today more than 200 companies with over 4000 participants contribute to the ongoing standardization efforts. In the 'Industry Showcase' and 'Interop Lab' rooms of the Conference, a wide variety of devices, tools and applications based on CIM are shown.

With the broad acceptance of Web Services for Management (WS-Management), true interoperable systems management now becomes a reality. Implementations range from baseboard management controllers (see here for drivers) and embedded devices to open source stacks and Microsoft Windows.

Monday, December 03, 2007

Memories from the past

I am in the heart of Silicon Valley visiting the Management Developers Conference which starts on Monday. More on that in a later post.

The first day I visited the Computer History Museum (CHM) with its marvelous collection of historic computers and parts. The majority of the collection is stored in the archive, vacuum-sealed in plastic and preserved for future generations. Only a small fraction of the artifacts is on display, dubbed 'visible storage'.

Here one can see parts of the original ENIAC computer, a real IBM System/360, the Apollo Guidance Computer or a ZUSE Z23. Too bad I didn't bring my camera.

What's unique about this museum are the - excuse me - human artifacts: those guys and gals still living in Silicon Valley who designed and hacked the early machines. I really enjoyed a guided tour given by Ray Peck, sprinkled with background information and anecdotes. Just wonderful.
Next was a live demonstration of the PDP-1 restoration project. One could see a 1961 computer up and running, demoed by Peter Samson and Lyle Bickley, who both hacked the PDP-1 during their student days at MIT. Peter is the original author of the PDP-1 music program and gave an example of his work. Hilarious!

On my way out, I picked up a free copy of Core, the museum's biannual publication. The article about rescued treasures was most interesting, showing how challenging preserving history can be.

To quote from the museum's flyer: "It's ironic that in an industry so concerned with memory, how quickly we forget."


Monday, August 13, 2007

Look who's sponsoring Ruby

Last weekend saw the Ruby Hoedown conference at RedHat's Raleigh headquarters, listing Microsoft as a sponsor. Interesting.

For those of you wondering Why Ruby?, look at the conference website.
The Ruby language is growing exponentially, partially because it offers more flexibility than other, more common languages.
Add Sun's support for Ruby last year, the famous Ruby on Rails web development framework and broad platform support, and this language is still HOT.

Friday, July 27, 2007

Metadata as a Service

openSUSE bug 276018 got me thinking about software repositories and data transfer again.

Problem statement

Software distribution in the internet age is moving away from large piles of disks, CDs, or DVDs and towards online distribution servers providing software from a package repository. The next version of openSUSE, 10.3, will be distributed as a 1-CD installation with online access to more packages.
Accessing a specific package means the client needs to know what's available and whether a package has dependencies on other packages. This information is kept in a table of contents of the repository, usually referred to as metadata.
First-time access to a repository requires the client to download all the metadata. If the repository changes, i.e. packages get version upgrades, large portions of the metadata have to be downloaded again - refreshed.

The EDOS project proposes peer-to-peer networks for distributing repository data.

But how much of this metadata is actually needed? How much bandwidth is wasted downloading metadata that gets outdated before first use?

And technology moves on. Network speeds rise, available bandwidth explodes, and internet access is as common as TV and telephone in more and more households. Internet flat rates and always-on connections will be as normal as electrical power coming from the wall socket in a couple of years. At the same time, CPUs get more powerful and memory prices are in constant decline.

But the client systems can't keep up since customers don't buy a new computer every year. The improvements in computing power, memory, and bandwidth are mostly on the server side.

And this brings me to Metadata as a Service.

Instead of wasting bandwidth on downloading and client computing power on processing the metadata, the repository server can provide a web service that handles most of the load. Clients only download what they actually need and cache as they see fit.

Client tools for software management become just frontends for the web service. Searching and browsing are handled on the server, where load balancing and scaling are well understood and easily handled.
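The idea can be sketched in a few lines of Ruby. Everything here is hypothetical - the class names, the index layout, and the packages are invented for illustration; no such service exists:

```ruby
# Minimal sketch of "Metadata as a Service": the full package index
# stays on the server, clients only query for what they need and cache
# the answers. All names and data below are invented.

class MetadataService
  def initialize(index)
    @index = index   # { name => { "version" => ..., "requires" => [...] } }
  end

  # Searching runs server-side over the whole index ...
  def search(pattern)
    @index.keys.grep(pattern)
  end

  # ... and a client fetches metadata for a single package on demand.
  def info(name)
    @index[name]
  end
end

class CachingClient
  def initialize(service)
    @service = service
    @cache = {}        # only queried entries are ever transferred
  end

  def info(name)
    @cache[name] ||= @service.info(name)
  end
end

service = MetadataService.new(
  "yast2-core"   => { "version" => "2.15.0", "requires" => ["libxml2"] },
  "yast2-libyui" => { "version" => "2.16.0", "requires" => ["yast2-core"] }
)
client = CachingClient.new(service)
puts service.search(/yast/).sort.inspect            # => ["yast2-core", "yast2-libyui"]
puts client.info("yast2-libyui")["requires"].inspect # => ["yast2-core"]
```

In a real deployment the two classes would sit on opposite ends of an HTTP connection, but the division of labor is the same: the index and the search live on the server, the client keeps only the slice it asked for.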

This could even be driven further by doing all the repository management server-side. Clients always talk to the same server, which knows the repositories the client wants to access and also tracks the software installed on the client. Upgrade requests can then be handled purely by the server, making client profile uploads obsolete. Certainly the way to go for mobile and embedded devices.
Google might offer such a service - knowing all the software installed on a client is certainly valuable data for them.

Just a thought ...

Wednesday, July 18, 2007

Hackweek aftermath

Novell Hackweek left me with a last itch to scratch -- Cornelius' proposal of a YCP-to-Ruby translator.

Earlier this year I had already added XML output to yast2-core, which came in very handy for this project. Using the REXML stream listener to code the translator was the fun part of a couple of late-night hacks.

The result is a complete syntax translator for all YaST client and module code. The generated Ruby code is nicely indented and passes the Ruby syntax checker.
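The stream-listener approach can be sketched roughly like this. The XML element names below are invented for illustration - the real ycpc XML schema differs - but the shape of the code is the same: react to start and end tags and emit Ruby source as you go:

```ruby
require 'rexml/document'
require 'rexml/streamlistener'

# Sketch of a stream listener that turns a (simplified, hypothetical)
# XML dump of YCP into Ruby source. The real ycpc schema is richer.
class YcpListener
  include REXML::StreamListener

  attr_reader :output

  def initialize
    @output = ""
  end

  def tag_start(name, attrs)
    case name
    when "module"
      @output << "module #{attrs['name']}\n"
    when "variable"
      # uninitialized YCP variables become nil in Ruby
      @output << "  #{attrs['name']} = #{attrs['value'] || 'nil'}\n"
    end
  end

  def tag_end(name)
    @output << "end\n" if name == "module"
  end
end

xml = <<XML
<module name="Arch">
  <variable name="_architecture"/>
  <variable name="_checkgeneration" value='""'/>
</module>
XML

listener = YcpListener.new
REXML::Document.parse_stream(xml, listener)
puts listener.output
```

Because the listener only sees one event at a time, it never holds the whole document in memory - handy for large YaST modules.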

Combined with Duncan's Ruby-YCP bindings, translating YCP to Ruby should be quite useful as we try to provide support for more widespread scripting languages.

The translator is available at svn.opensuse.org and requires a recent version of yast2-core, which supports XML output and the '-x' parameter of ycpc.
Then run
  ycpc -c -x file.ycp -o file.xml

to convert YCP code to XML.
Now run the XML-to-Ruby translator:
  cd yxmlconv
  ruby src/converter.rb file.xml > file.rb


Translating e.g. /usr/share/YaST2/modules/Arch.ycp

{
module "Arch";
// local variables
string _architecture = nil;
string _board_compatible = nil;
string _checkgeneration = "";
boolean _has_pcmcia = nil;
boolean _is_laptop = nil;
boolean _is_uml = nil;
boolean _has_smp = nil;
// Xen domain (dom0 or domU)
boolean _is_xen = nil;
// Xen dom0
boolean _is_xen0 = nil;
/* ************************************************************ */
/* system architecture                                          */
/**
 * General architecture type
 */
global string architecture () {
    if (_architecture == nil)
        _architecture = (string)SCR::Read(.probe.architecture);
    return _architecture;
}

...
outputs the following Ruby code
module Arch
  require 'ycp/SCR'
  _architecture = nil
  _board_compatible = nil
  _checkgeneration = ""
  _has_pcmcia = nil
  _is_laptop = nil
  _is_uml = nil
  _has_smp = nil
  _is_xen = nil
  _is_xen0 = nil

  def architecture(  )
    if ( _architecture == nil ) then
      _architecture = Ycp::Builtin::Read( ".probe.architecture" )
    end
    return _architecture
  end
...
Preserving the comments from the YCP code would be nice -- one for the next Hackweek.
Btw, it's fairly straightforward to change the translator to output e.g. Python or Java or C# or ...

Tuesday, July 17, 2007

Smolt - Gathering hardware information

LWN pointed me to this mail from the Fedora Project inviting other distributions to participate in the Smolt project. Smolt gathers hardware data from Linux systems and makes it available for browsing.
They currently have data from approx. 80,000 systems, mostly x86, which will hopefully grow in the future. The device and system statistics are quite interesting to browse. Besides hardware, Smolt also tracks the system language, kernel version, swap size, etc. It also tries to make an educated guess at desktop vs. server vs. laptop - typically a blurred area for Linux systems.

Once they offer an online API for direct access to the Smolt server database, this will really be quite useful.

Monday, July 16, 2007

EDOS Project

Michael Schröder's hackweek project is based on using a well-known mathematical model for describing and solving package dependencies: satisfiability - SAT.
Apparently, some research on this topic had been done before. The oldest mention of SAT for package dependencies I found is a paper by Daniel Burrows dating from ca. mid-2005. Daniel is the author of the aptitude package manager and certainly knows the topic of dependency hell inside out.
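The SAT view of dependencies is easy to illustrate. The toy below is not the project's solver - real solvers are vastly more clever - but it shows the encoding: each package is a boolean variable, "A requires B" becomes the clause (not A or B), "A conflicts with B" becomes (not A or not B), and a brute-force search over all assignments stands in for a real SAT algorithm. Package names and clauses are invented:

```ruby
# Each package is a boolean variable: true = installed.
PACKAGES = [:a, :b, :c]

# Clauses are arrays of literals; [:not, :b] is a negated literal.
CLAUSES = [
  [[:not, :a], :b],          # a requires b     => (!a OR b)
  [[:not, :b], [:not, :c]],  # b conflicts c    => (!b OR !c)
  [:a],                      # we want a installed
]

# A clause holds if at least one of its literals is true in the model.
def satisfied?(clause, model)
  clause.any? do |lit|
    lit.is_a?(Array) ? !model[lit[1]] : model[lit]
  end
end

# Brute force: try every truth assignment, return the first that
# satisfies all clauses, or nil if the dependencies are unsolvable.
def solve(packages, clauses)
  (0...(1 << packages.size)).each do |bits|
    model = {}
    packages.each_with_index { |p, i| model[p] = bits[i] == 1 }
    return model if clauses.all? { |c| satisfied?(c, model) }
  end
  nil
end

model = solve(PACKAGES, CLAUSES)
puts model.select { |_pkg, on| on }.keys.inspect   # => [:a, :b]
```

An unsatisfiable clause set (e.g. a package that both requires and conflicts with the same package) makes solve return nil - the SAT equivalent of "dependencies cannot be resolved".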

However, the most interesting link Google revealed was the one to the EDOS project.
EDOS is short for Environment for the Development and Distribution of Open Source software and is funded by the European Commission with 2.2 million euros. The project aims to study and solve problems associated with the production, management, and distribution of open source software packages.
Its four main topics of research are:

  • Dependencies With a formal approach to managing software dependencies, it should be possible to manage the complexity of large free and open source package-based software distributions. The project has already produced a couple of publications and tools, but I couldn't find links to source code yet.
  • Downloading The problem of huge and frequently changing software repositories might be solvable with P2P distribution of code and binaries.
  • Quality assurance All software projects face the dilemma between release early, release often and system quality. One can either
    • reduce system quality
    • or reduce the number of packages
    • or accept long delays before the final release of a high-quality system
    EDOS wants to develop a testing framework and quality assurance portal to make distribution quality better and measurable.
  • Metrics and Evaluation The decision between old, fewer features, more stable vs. new, more features, more bugs should be better grounded by defining parameters to characterize distributions, distribution editions, and distribution customizations.

Interesting stuff for a lot of distributions out there ...

Monday, July 02, 2007

openwsman-yast now returns proper datatypes

After five days of hacking last week, a final itch was left that needed scratching. The YaST openwsman plugin only passed strings back and forth, losing all the type information present in the YCP result value. So I added some code to convert basic YCP types to XML (in the plugin) and from XML to Ruby (on the client side). Now the result of a web service call to YaST can be processed directly in Ruby. Here's a code example showing the contents of /proc/modules on a remote machine:
  require 'rwsman'
  require 'yast'
  client = WsMan::Client.new( 'http', 'client.internet.org', 8889, '/wsman', 'user', 'password')
  options = WsMan::ClientOption.new
  schema = YaST::SCHEMA
  uri = schema + "/YCP"
  options.property_add( "ycp", "{ return SCR::Read( .proc.modules ); }" )
  result = client.invoke( uri, "eval", options )
  modhash = YaST.decode_result( result ) # hash of { modulename => { size=>1234, used=>3 } }
Supported are void, bool, integer, float, string, symbol, path, term, list, and map -- which should be sufficient for most of YaST. The YaST class is here. You need at least version 1.1.0 of openwsman and openwsman-yast, both available from the openSUSE Build Service. And, btw, the source code for openwsman-yast is now hosted on svn.opensuse.org.
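To give a feeling for what such an XML-to-Ruby type mapping looks like, here is a small recursive decoder. The element names and attributes are invented for illustration and do not match the actual openwsman-yast wire format (term is also omitted for brevity):

```ruby
require 'rexml/document'

# Hypothetical wire format: every YCP value is a <value> element with a
# "type" attribute; lists carry their items as children, maps carry
# alternating key and value children.
def decode(elem)
  case elem.attributes["type"]
  when "void"    then nil
  when "bool"    then elem.text == "true"
  when "integer" then elem.text.to_i
  when "float"   then elem.text.to_f
  when "string", "symbol", "path" then elem.text
  when "list"
    elem.elements.to_a.map { |e| decode(e) }
  when "map"
    # children alternate key, value; pair them up into a Hash
    Hash[elem.elements.to_a.map { |e| decode(e) }.each_slice(2).to_a]
  end
end

xml = <<XML
<value type="map">
  <value type="string">usbcore</value>
  <value type="map">
    <value type="string">size</value>
    <value type="integer">1234</value>
  </value>
</value>
XML

doc = REXML::Document.new(xml)
modhash = decode(doc.root)
puts modhash["usbcore"]["size"]   # => 1234
```

The nice property of a recursive decoder like this is that nested YCP structures - maps of lists of maps - come out as the corresponding nested Ruby structures for free.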

Thursday, June 28, 2007

Remote management with Rails

The Rails demo for remote systems management with WS-Man is available at the openwsman web site.
Just follow the install and configure instructions. In short you need
  • openwsman
    An open source implementation of the ws-management standard.
  • rwsman
    Ruby bindings for openwsman client operations.
  • Ruby On Rails
    Web development that doesn't hurt
  • Railsapp
    Rails demo application for rwsman
Once everything is properly installed, start the Rails web server with ruby script/server. Now point your browser to http://localhost:3000 and you'll see the startup page. Click on the text, then click on Discover and the Discovery page will appear.

Look closely at the Actions line for each host and you'll notice the YaST action for the openSUSE client. This client has my openwsman-yast plugin installed.
The demo application allows you to start and stop the desktop (the xdm service, to be precise) and to switch the desktop environment between KDE and GNOME.

Doc has videotaped a demo; you can find it on the idea.opensuse.org blog.

YaST as a WebService

Thanks to openwsman and openSUSE hack week, Linux systems with YaST installed can now be remotely controlled via a WebService.

My idea is now available as a package in the openSUSE build service.

Today I intend to use the openwsman Ruby bindings and their Rails demo application to show true remote management.

Stay tuned ...

Friday, June 22, 2007

A clean start

So, here it is now, my shiny new blog space. But how to start? What to blog about first? Sometimes the small things are the hardest ... But Slashdot to the rescue. This post gave me a good idea for a start.

What does YOUR keyboard look like?

Those of you who have a cleaning woman wiping the keyboard once a week can stop reading now. All the others, wanting to get rid of THIS sight, read on! I will show you how to make your keyboard shiny and almost new by putting it into the dishwasher.

Using the dishwasher for keyboard cleaning

The following description is for simple Cherry keyboards; other brands might need a different approach, but with the right tools and technique this should work for any kind of keyboard. Here's a picture of the dirty keyboard I'm going to disassemble.

Putting the complete keyboard into the dishwasher might work, but after all it's just the keys which need cleaning, not the electronics, cable, or key mechanics. To start disassembly, turn the keyboard around to get access to the notches holding the case together. The upper and the lower case of the keyboard are held together by a number of L-shaped notches (see picture below for a close-up), which have to be bent aside. (Is notch the right word for this? Maybe a native speaker can come up with a better word.) Go and grab your toolbox and find a flat screwdriver, or use simple scissors, as I do. Be careful not to break anything. The Cherry keyboard has four notches on the upper and five on the lower side. There are also three small ones in the middle, but these usually pop open without the need for a tool.

Now the upper side of the keyboard can be lifted to open the case. As you can see, the upper side holds all the keycaps; the underside contains the mechanics and electronics. A lot of dirt usually accumulates on the black rubber mat, which is used instead of the coil springs you'll find in older (or more expensive) keyboards. Just take the rubber mat out and clean it with a damp cloth. Below the rubber mat, two plastic sheets with metal layers (forming a capacitor) appear. The plastic sheets are wedged in by the small circuit board in the upper right corner. Further disassembly needs a T8 Torx screwdriver. Removing the plastic sheets reveals a metal plate. This simply gives the keyboard some weight and keeps the underside from breaking if keys are pushed too hard. The metal plate is not fixed to the case and can easily be taken out.

Ready for the dishwasher. Better use the economy setting; this should keep the washing temperature low enough to prevent the plastic from melting. Although normal dishes get dried 'cupboard ready', water will still be hidden in the keycaps. Simply put the keyboard in a dry place for a couple of hours to let the remaining water evaporate.

Reassembling is easy. Put the metal plate in, then the plastic sheets, screw the electronics back in (ensure that the plastic sheet is below the circuit board), and put the rubber mat on top. Be careful and don't force anything. All pieces have holes and guidance support from the underside of the keyboard case. As the last step, put both sides of the keyboard back together and press gently. You should hear a noticeable 'click' as the notches snap back in. That's it, now enjoy your shiny-and-almost-new keyboard!