No, we win over the competition by delivering something better. That's why I think we should limit our hardware efforts. It's better to say "this motherboard is perfect for a Linux computer" than to say "Linux should work on any computer". Why should we bother reverse-engineering some chip on a motherboard that complies with no standard, when there are many motherboards that do? This is where Apple got it right. They don't make OS X work with all hardware. They say "this is how Mac hardware works", and then everyone who complies gets to be called "Mac hardware". Q) Will my printer work with a Mac? A) Is it a Mac printer? Q) Will my printer work with Linux? A) Perhaps. That is just confusing.

I also agree that there is a lot of work to do outside of driver space. But that is all the more reason not to spend valuable resources working around hardware compatibility issues. If people want to do driver development, there is work to be done in the FOSS radeon driver, for example. It is a FOSS driver, we have full documentation from AMD, and that is the kind of driver that deserves our attention.

At a higher level there is also work to be done. I previously stated that we lack an abstract IPC mechanism like the Component Object Model (COM) in Windows. The bug we just had in Xorg is a symptom of this: there should not be an Xorg-specific IPC mechanism. Of course a standard IPC abstraction layer might have had the same bug, but that is not a reason to forgo one. We have lots of component models, such as XPCOM, DBus, Bonobo and KPart, yet none of them provides what MS COM provides. DBus is good, but it does not cover all IPC needs. Bonobo has some features that overlap with DBus. KPart is also nice, but KDE-specific.

When it comes to network IPC, web services are the de facto standard, and with Fast Infoset I doubt we can do anything faster. However, we still need something that bridges local IPC. Web services are nice, but using network datagram services for local IPC is probably inefficient. One case is when the client and server are in the same process; there, anything that copies data would be inefficient (see the sketch below). We also have hardware-supported IPC in some processors, for example the Tilera Tile. An IPC library should be able to use that transparently. This is one of the reasons we need an abstraction layer: to use hardware like this, only the abstraction layer needs to change, and applications do not need to bother.

Bugs are another reason. Abstraction layers of all kinds mean less code redundancy, and that means fewer bugs. How many implementations of CRC32 does a Linux desktop distribution ship? This is another issue we have to address. Yes, we want programmers to be able to choose how to do things, but having 50 identical implementations is not choice; it is a breeding ground for bugs (see the second example below).

Some say that having both KDE and Gnome divides Linux. I do not agree at all; I think having both improves Linux. However, what they have in common should be shared code. One example is DBus, which is shared between them. Another thing that should be shared is WebKit: currently there is one WebKit component for KDE and another for Gnome. To fix that we need a common way to define components, such as Windows COM. KDE has KPart for this; Gnome used to have the now-obsolete Bonobo. Windows has a layer above COM called ActiveX, which is used extensively to share graphical components.
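To make the in-process case concrete, here is a minimal sketch in Python of what I mean by a transport-transparent component registry. Everything named here is hypothetical (the ComponentBus class and the org.example.Echo interface are invented for illustration); the point is only that the same resolve() call hands back a direct object reference when client and server share a process, and falls back to D-Bus (the real dbus-python API) only when they do not:

class ComponentBus:
    def __init__(self):
        self._local = {}  # interface name -> in-process implementation

    def register(self, interface, obj):
        """Publish an in-process implementation of `interface`."""
        self._local[interface] = obj

    def resolve(self, interface, bus_name=None, object_path=None):
        """Return a proxy for `interface`, picking the cheapest transport."""
        if interface in self._local:
            # Same process: hand back the object itself. A call on it is
            # an ordinary method call, with no copying or serialization.
            return self._local[interface]
        # Different process: fall back to D-Bus (needs dbus-python).
        import dbus
        session = dbus.SessionBus()
        remote = session.get_object(bus_name, object_path)
        return dbus.Interface(remote, dbus_interface=interface)

# In-process use: the call below never touches any IPC machinery.
class Echo:
    def ping(self, message):
        return message

bus = ComponentBus()
bus.register("org.example.Echo", Echo())
echo = bus.resolve("org.example.Echo")
print(echo.ping("no copies, no serialization"))

The design choice worth arguing about is that the caller writes the same two lines either way; whether the Tilera-style hardware channel, D-Bus, or a direct reference carries the call is the registry's decision, not the application's.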
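And on the CRC32 point: every distribution already ships a well-tested CRC-32 in zlib, so reusing it is a one-liner rather than a fifty-first private copy. A trivial illustration in Python:

import zlib

# Reuse the CRC-32 the platform's zlib already provides instead of
# carrying a private implementation in every codebase.
data = b"The quick brown fox jumps over the lazy dog"
print(hex(zlib.crc32(data) & 0xFFFFFFFF))  # 0x414fa339, the standard
                                           # test vector for this input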
Obviously ActiveX is far from perfect, because the new .NET assemblies are not fully compatible with COM/ActiveX. I do not really know why, but we should analyze this so that we do not design something with similar flaws. The reason I bounce this issue around here instead of just implementing something is that I realize there may be a lot of pitfalls that need to be considered. COM is evidently considered flawed enough that a new model was needed in .NET; I do not know the exact reasons, but they should be analyzed before we start coding. I want a model that can be used easily together with Mono and Java without such problems occurring.

When I was a Windows developer moving to .NET, the concept of a "binding" was unknown to me; most libraries were transparently usable through COM. In Linux, every major language has "bindings" to every popular library. That means redundant code and a higher probability of bugs. With a good IPC mechanism, my PHP class would be able to call my C++ class, which in turn calls a Java class (the D-Bus example below shows the idea). And once such a library exists, it should be extended with a standard for custom controls, so that KDE and Gnome can use each other's components without needing both a KDE and a Gnome version of the same component.

Another issue I have spent some time thinking about is the architecture of modern operating systems. We are seeing development move towards a hypervisor-plus-supervisor model, and I think it would be wise to make full use of this. Ubuntu/Linux can currently act as both hypervisor and supervisor, but I think it would be best to have a specialized hypervisor distribution and a specialized supervisor distribution. Most Ubuntu installs would use Ubuntu as both hypervisor and supervisor, but even that use case has its advantages. Most importantly, it stresses a new definition of "operating system": the hypervisor would take on the hardware abstraction role and be more of a layer above the BIOS than a part of the operating system itself.

This is sort of a Hannibal strategy, where we divide the front and flank the enemy from two sides. Microsoft would be pressured to make Windows a supervisor that runs not only on top of their own hypervisor but on top of the Linux hypervisor. It already does to some extent, but a stricter hypervisor/supervisor architecture would force them to comply. Just as firmware is out of their hands, hardware would be too. That would force them to deal with the fact that DirectX has problems with this kind of architecture: the supervisor cannot host something like DirectX if it requires hardware drivers, and if they move DirectX into their own hypervisor, it will not work on other platforms. The only choice would be to split DirectX into a hardware component that works with other hypervisors and a software component that would be easy for other supervisors to implement. In fact, a hypervisor/supervisor standardization would make it impossible to use hardware vendor lock-in tricks within the supervisor.

With Windows as a supervisor rather than a classic operating system, Linux as a supervisor would be able to attack from the other flank. First of all, neither Linux nor Windows would be exclusive; one of the good things about supervisors is that they can coexist. Only the hypervisor is exclusive, and at that point the hypervisor is irrelevant to the user. This means Windows games and applications are no longer a reason not to have Linux installed, and without the headache of dual boot. Both could theoretically reach 100% usage share, since running one supervisor does not exclude another.
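D-Bus already shows what this looks like in practice, because the contract is the wire protocol rather than a per-language binding. The client below is Python only for brevity; the same call works from PHP, C++ or Java, and it neither knows nor cares what language the daemon is written in. (org.freedesktop.Notifications is a real, standard session service; this should run on any desktop with dbus-python installed.)

import dbus

# Any language that speaks the D-Bus wire protocol can call any other.
bus = dbus.SessionBus()
proxy = bus.get_object("org.freedesktop.Notifications",
                       "/org/freedesktop/Notifications")
notify = dbus.Interface(proxy, "org.freedesktop.Notifications")
notify.Notify("demo", 0, "", "Hello",
              "The daemon answering this could be written in anything",
              [], {}, 3000)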
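To make the supervisor side concrete, here is a rough sketch of what a "stamp out a new supervisor instance" helper could look like on a KVM hypervisor, using qemu-img's copy-on-write backing files so each clone is cheap. The function name and paths are invented for illustration; this is a sketch of the idea, not a finished tool:

import subprocess

def clone_supervisor(base_image, clone_name):
    """Create a copy-on-write clone of a base supervisor image.

    The clone only stores blocks that diverge from the base, so new
    instances cost almost nothing until they are actually used.
    (Paths below are hypothetical examples.)
    """
    clone_path = "/var/lib/supervisors/%s.qcow2" % clone_name
    subprocess.check_call([
        "qemu-img", "create", "-f", "qcow2",
        "-o", "backing_file=%s,backing_fmt=qcow2" % base_image,
        clone_path,
    ])
    return clone_path

clone_supervisor("/var/lib/supervisors/ubuntu-desktop-base.qcow2", "work")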
So let's make an Ubuntu hypervisor spin and make sure there are supervisor images available for all the other spins. And perhaps a tool to easily create new copies of them in the hypervisor, along the lines of the sketch above.

> My humble opinion: We'll likely not win over the mindshare of the
> majority by berating the competition, nor by entering into a
> feature-by-feature (or bug-by-bug) analysis of their products. In fact,
> by continually mentioning their names and products we give them
> additional free word-of-mouth advertising. The best way to gain
> mindshare (and fix Bug #1) is to focus on what Ubuntu does best and
> continually reinforce that message, politely and tactfully.
>
> On 10-08-23 07:20 AM, Tom wrote:
> > I have often found trouble with a new Windows in the first year of its
> > release. They tend to act a lot like alpha releases, except you have to
> > pay for them and there is little or no support ("blame the user", "that
> > error message does not exist", etc.), let alone efficient means of bug
> > tracking/fixing. Within approximately the last year of official support
> > the thing becomes really quite usable, but it really starts to fly about
> > the time official MicroSquish support ends. Of course by then it is
> > extremely vulnerable to viruses and there are many known issues, but the
> > crucial word is "known" rather than "denied". I have been visiting a few
> > offices in the past few weeks, and almost all of them have brand-new
> > machines with XP. Win7 seems to be for home users; the type of users
> > MicroSquish doesn't care about and doesn't mind annoying, because they
> > can't fight back and can be easily duped by "tech support" over an
> > expensive phone call.
> >
> > Really glad to have this space to blow off steam, knowing that most of
> > you might disagree with the odd comment or be able to finesse a better
> > or more precise way of saying the same thing, but who understand what I
> > mean because of having tried an alternative to Windows.
> > Regards from Tom :)