- File reading module
- Assorted 'filter' modules
- File writing module
- Configuration module
- UI module
The basic idea of this part is that data is read from disk (1) and then
passed through a series of filters (2) that compress, encrypt, do byte
counting for statistics, and so on; the possibilities are endless. Each
of these modules will have access to the server to help it decide
whether it should process the file or veto it being backed up. The
reasoning is that the server could have a cache of files common to
several machines. A module could be written to check md5 sums and veto a
file if the server already has a valid backup of that file. This would
allow you to effectively back everything up while cutting your media and
network usage. Other such ideas are possible. After the file has been
filtered it is passed to the writing module (3). This module is
responsible for writing the file to media or passing it off to the
server, and it is the clearing house for all traffic to the server.
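As a rough illustration only (none of these names exist yet; the filter,
server, and writer interfaces are all assumptions), a filter chain with
an md5-based veto could look something like this in Python:

    import hashlib

    class Filter:
        """Base class for a filter module (2)."""
        def process(self, path, data, server):
            """Return transformed data, or None to veto this file."""
            return data

    class Md5DedupFilter(Filter):
        """Veto a file the server already has a valid backup of."""
        def process(self, path, data, server):
            digest = hashlib.md5(data).hexdigest()
            if server.has_backup(digest):   # hypothetical server lookup
                return None                 # veto: skip this file
            return data

    def backup_file(path, filters, writer, server):
        with open(path, "rb") as f:         # (1) file reading module
            data = f.read()
        for flt in filters:                 # (2) filter chain
            data = flt.process(path, data, server)
            if data is None:                # a filter vetoed the file
                return
        writer.write(path, data)            # (3) file writing module

The point is only that each filter sees the data and a handle on the
server, and that a veto stops the file from going any further.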
Right now, XML seems to be the hot thing for storing configuration;
plain text used to be good enough, and some people like using SQL for
some things. For that reason, the configuration is also read from a
module (4). The idea is that the rest of the program doesn't care how
the configuration is stored as long as it can find out the values it
needs.
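As a purely hypothetical sketch of that idea, the rest of the program
could talk to a small configuration interface and never know whether the
values come from plain text, XML, or SQL:

    class ConfigBackend:
        """Interface the rest of the program sees (4)."""
        def get(self, key, default=None):
            raise NotImplementedError

    class TextFileConfig(ConfigBackend):
        """One possible backend: simple 'key = value' lines."""
        def __init__(self, path):
            self.values = {}
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#") and "=" in line:
                        key, _, value = line.partition("=")
                        self.values[key.strip()] = value.strip()

        def get(self, key, default=None):
            return self.values.get(key, default)

An XML or SQL backend would just be another subclass; callers only ever
call get().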
Similarly with user interfaces (UI): today it's GNOME and KDE, before it
was curses and Tcl/Tk. Why limit ourselves to one implementation (5)?
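In the same hypothetical style, the core could report through a tiny UI
interface (5), and a curses, GNOME, or KDE front end would simply
implement it however it likes:

    class UserInterface:
        """Interface the core uses to talk to whatever UI is loaded."""
        def progress(self, path, bytes_done, bytes_total):
            pass
        def error(self, message):
            pass

    class ConsoleUI(UserInterface):
        """Trivial text front end; a GUI would subclass the same way."""
        def progress(self, path, bytes_done, bytes_total):
            print(f"{path}: {bytes_done}/{bytes_total} bytes")
        def error(self, message):
            print("error:", message)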