Risks

Abstract

Risk analysis is an empirical process: it draws on our knowledge of existing architectures and products, and on a vision of the market.

This document is scoped to the Shaman product, leaving most project-wide aspects aside.

Risk matrix

Risk: Another free, Open Source project appears, doing the same things as ours, with a better or more advanced design.
Measure: If this occurs during the first month of the first project round, consider dropping the Shaman project and joining the other one. After the first month, continue the Shaman project at least until the end of the first round: this will provide at least an overview of the architectural problems and the feasibility of features. This measure is related to LC's internship.
Risk: Communication between the Spirit and Insight servers doesn't work well, because we have no experience writing this kind of middleware.
Measure: Start the project with a vertical cut through the architecture, implementing a few Use Cases that require all the servers (Spirit, Legend, Insight) to work together. Problems should thus be detected and corrected sooner.

Risk: Components don't collaborate, due to version issues.
Measure: Take great care with library versioning. Use jars from official distributions whenever possible: they come with source code, which makes debugging easier.

Risk: Technical points absorb all the time, leaving none for writing documentation.
Measure: Focus on a release ASAP, including all the reasonable stuff: documentation, automated tests, build script, launch scripts, etc. Requesting feedback on the whole distribution will be easier.

Risk: Insight server persistence takes too much time to implement.
Measure: Use an alternate persistence solution based on Prevayler.

Risk: Prevayler doesn't allow large data volumes.
Measure: Rewrite the persistence layer, taking advantage of the existing regression tests.
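Prevayler relies on object prevalence: the whole model stays in memory, and every change is recorded as a serializable command in a journal that can be replayed after a restart. A minimal stdlib-only sketch of that idea (illustrative class names, not Prevayler's actual API):

```java
import java.io.*;
import java.util.*;

public class PrevalenceSketch {

    // The in-memory "prevalent system": here, just a set of document titles.
    static class Library implements Serializable {
        final SortedSet<String> titles = new TreeSet<>();
    }

    // Every change to the model is a serializable command.
    interface Command extends Serializable {
        void executeOn(Library library);
    }

    static class AddTitle implements Command {
        final String title;
        AddTitle(String title) { this.title = title; }
        public void executeOn(Library library) { library.titles.add(title); }
    }

    static SortedSet<String> runDemo() throws IOException, ClassNotFoundException {
        File journal = File.createTempFile("shaman-journal", ".bin");
        journal.deleteOnExit();

        // First run: write each command to the journal before applying it,
        // so the change is durable before it becomes visible.
        Library library = new Library();
        try (ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(journal))) {
            for (String title : new String[] { "Iliad", "Odyssey" }) {
                Command command = new AddTitle(title);
                out.writeObject(command);
                command.executeOn(library);
            }
        }

        // Simulated restart: replaying the journal rebuilds the same state.
        Library recovered = new Library();
        try (ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(journal))) {
            while (true) {
                try {
                    ((Command) in.readObject()).executeOn(recovered);
                } catch (EOFException endOfJournal) {
                    break;
                }
            }
        }
        return recovered.titles;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runDemo()); // prints [Iliad, Odyssey]
    }
}
```

Because the measures only ever go through serializable commands, the regression tests written against this layer survive a later rewrite of the journaling mechanism.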
Risk: Integration tests are complex to run, due to server launching.
Measure: Automate server launching in a dedicated test framework. Make all tests independent by restarting all servers for each test.
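The per-test restart policy can be sketched as a tiny harness (the names below are illustrative, not an existing framework):

```java
import java.util.*;

public class IntegrationHarness {

    // Minimal lifecycle contract for a launchable server.
    interface Server {
        void start();
        void stop();
    }

    private final List<Server> servers;

    IntegrationHarness(List<Server> servers) {
        this.servers = servers;
    }

    // Start every server before the test body and stop them all after,
    // even if the test throws, so each test sees a freshly started system.
    void runTest(Runnable testBody) {
        servers.forEach(Server::start);
        try {
            testBody.run();
        } finally {
            for (int i = servers.size() - 1; i >= 0; i--) {
                servers.get(i).stop(); // stop in reverse startup order
            }
        }
    }

    // Demo with fake servers that only record their lifecycle events.
    static List<String> demo() {
        List<String> log = new ArrayList<>();
        IntegrationHarness harness = new IntegrationHarness(
                List.of(fake("Spirit", log), fake("Insight", log)));
        harness.runTest(() -> log.add("test: import document"));
        harness.runTest(() -> log.add("test: search document"));
        return log;
    }

    static Server fake(String name, List<String> log) {
        return new Server() {
            public void start() { log.add("start " + name); }
            public void stop() { log.add("stop " + name); }
        };
    }

    public static void main(String[] args) {
        demo().forEach(System.out::println);
    }
}
```

Restarting per test costs time, but it removes the hardest class of test failures: the ones that only appear in a particular execution order.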
Risk: The whole system doesn't scale.
Measure: Detect this sooner by running harness tests, and correct the architecture.
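A harness test can start as simply as hammering one operation from many threads and reporting throughput. The sketch below is illustrative (our own names, with a stand-in instead of a real server call):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class LoadHarness {

    // Run the same operation from many threads and report throughput.
    static long run(int threads, int callsPerThread, Runnable operation)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        AtomicLong completed = new AtomicLong();
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                for (int i = 0; i < callsPerThread; i++) {
                    operation.run();
                    completed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(completed.get() + " calls in " + elapsedMs + " ms");
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a real call to the Insight server.
        run(8, 1_000, () -> Math.sqrt(42));
    }
}
```

Running such a harness regularly, with growing thread counts and data volumes, turns "doesn't scale" from a production surprise into a curve we can watch.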
Risk: Some unplanned features may be required for production use at greater scale (e.g. content pre-processing during import).
Measure: Break what must be broken to add the required feature. Trying to plan every possible feature at the start of the project would take much more time and would slow down our understanding of the real problems (the ones that only become visible once the system has been implemented).



Copyright 2002 Laurent Caillette and the Université René Descartes, Paris 5.
All rights reserved.