From: Marcus Hufvudsson talos@algonet.se
Greetings Linux guru!
I recently read the Linux Journal May edition and some people had some serious security problems. I got some of them too, and in your answer to one you recommended the "Tripwire" program for more security. I hope you don't mind me mailing you (got the address from the article). Anyway you recommend ftp.cs.perdue.edu for downloading. But when I tried to connect it didn't respond. Do you know any mirrors or any other ftp that contains Linux security tools?
- talos (root today, gone tomorrow)
There was a typo in that article. It WAS supposed to be ftp.cs.purdue.edu -- but is now supposed to be at ftp://coast.cs.purdue.edu/pub/COAST (they've been moved).
Here's the full URL to Tripwire: ftp://coast.cs.purdue.edu/pub/COAST/Tripwire
You should definitely browse around and read some of the other papers -- and try some of the other tools out there at the COAST (Computer Operations, Audit, and Security Technology) archive.
Sadly it seems to be neglected -- the whole "tools_new" tree is dated "October, 1995" and is empty.
All of the good stuff there is under: ftp://coast.cs.purdue.edu/pub/tools/unix (including symlinks that lead back to the Tripwire package).
Apparently they don't do anything with the FTP site because the real work has gone into their web pages at: http://www.cs.purdue.edu/coast/archive/Archive_Indexing.html
Another more recent effort which will be of more direct interest to Linux admins is The Irish Computer Security Archives: http://skynet.ul.ie/~flynng/security/ ... with the following being of particular interest: http://skynet.ul.ie/~flynng/security/bugs/linux/ ... and: http://skynet.ul.ie/~flynng/security/tools
Another good site (recently moved) is The Linux Security WWW: http://www.aoy.com/Linux/Security ... where I particularly like: http://www.aoy.com/Linux/Security/OtherSecurityLinks.html
One of these days I'm going to annotate the 600 or so links in my main lynx_bookmarks file and post it to my own web pages. But -- not this morning (3 am).
I spend so much time doing TAG (The Answer Guy) and other mailing list and newsgroup stuff that I never get to my own web pages. However the patch that I created to allow Tripwire to compile cleanly under Linux is on my ftp site and a link can be found somewhere under http://www.starshine.org/linux/ (I really have to organize those pages one of these days).
-- Jim
To: Jonathan Albrecht albrecht@algorithmics.com
When setting your prompt or dates or app-defaults you sometimes need those little %N, or %d, or %m substitution thingies. What are they and where can I get a list of what they mean?
They are "replaceable parameters" and are used by a variety of shells and applications.
They differ for each shell or application. For example I use bash -- and my prompt is:
PS1='[\u@\h \W]\$ '
Which looks like:
[jimd@antares jimd]$
When I'm in my home directory and logged in as jimd and would look like:
[root@main local]#
If I was 'root' on the host "main" and in the /usr/local directory.
zsh and tcsh also have similar "meta sequences" for their shell prompts. Just read the man page for your shell and search for "prompt."
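To make the comparison concrete, here's a sketch of roughly equivalent prompts in all three shells. The escape sequences are documented in each shell's man page (search for "prompt" or "PROMPTING"):

```shell
# bash (~/.bashrc): \u = user, \h = host, \W = basename of the current directory
PS1='[\u@\h \W]\$ '

# tcsh (~/.tcshrc): %n = user, %m = host, %c = trailing directory component
#   set prompt = '[%n@%m %c]$ '

# zsh (~/.zshrc): %n = user, %m = host, %1~ = trailing directory, %# = $ or #
#   PS1='[%n@%m %1~]%# '

# bash only expands the escapes when it actually displays a prompt; the
# variable itself just holds the literal string:
printf '%s\n' "$PS1"
```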
X app-default and other xrdb (X Windows resource database) entries are pretty mysterious to me. But I imagine that the info about these sequences is mostly in their man pages somewhere. I'm sure it's all in the sources.
The %d syntax is most often seen in the C programming language's printf() and scanf() functions. There are various "format specifiers" that dictate how a particular argument will be formatted. This includes information about whether a value will be displayed as a decimal number, a string, a hexadecimal value -- and how wide the field will be, whether it will be left or right justified -- etc. The \c syntax is also used in C for inserting "non-printing" characters -- like newlines, tabs, and for specifying ASCII characters by octal or hexadecimal value.
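The shell's own printf(1) command borrows C's format specifiers, so it makes a handy sandbox for experimenting with them:

```shell
printf '%d\n'    42        # decimal number
printf '%5d\n'   42        # right-justified in a 5-character field
printf '%-5d|\n' 42        # left-justified ("-" flag); "|" marks the field edge
printf '%x\n'    255       # hexadecimal
printf '%o\n'    8         # octal
printf '%s\n'    "hello"   # string
# \n and \t are the C-style escapes for newline and tab
```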
Since programmers are used to this syntax in their code they often use a similar syntax when they write scripting languages (shells) and when they design the configuration file syntax for their applications.
I'm sorry to say there's no "single source" or reference for all of these. You'll just have to hunt through the docs and man pages for each of the apps and utilities that you're interested in.
From: Cyrille Chepelov chepelov@rip.ens-cachan.fr
So far I've had the good sense to stay away from striping under NT and Linux. I've heard that the ccd code for FreeBSD is pretty stable, though.
Well, my Linux partition is used <5% of the overall time, but sometimes I need it to figure things out -- once the "small" problem with disk IDs was solved, there are no cohabitation problems between NT and Linux.
This sounds like a typically ignorant design decision. It seems to say to the world:
"Standards are for weaklings -- we don't need to follow them -- even when we created them!"
Sure, even if they did it unilaterally, it was up to them to at least loudly publicize what they did.
I disagree. "Unilateral" is completely anathema to "Industry Standards." It is totally arrogant to speak for an industry.
(We won't cover the issue of government regulatory bodies making determinations in a "unilateral" way -- since those aren't "industry standards" they are "government regulations").
Publicizing that you are violating industry standards doesn't improve interoperability. What other reason is there to create and publish a "standard" (even an ad hoc one).
If they think there's a real need to put proprietary information in the very first sector of the disk (the spot reserved for the MBR) -- then perhaps they should announce that these disks won't have PC partitions at all. It then becomes an "all NT or nothing" decision for each disk.
I don't think there is such a need -- and I think their approach displays either a gross lack of consideration, creativity and foresight -- OR -- a deliberate act of hostility to those unruly customers who would dare use any "other" operating systems on "their" NT boxes (or maybe a little of each -- some from the programmers and some of the QA teams).
Microsoft can cop out with a line like: "We don't intend that NT Servers should be installed on systems with other operating systems -- it is intended for dedicated systems."
It would irritate me. But I'm not one of their "important" customers anyway. Since most platforms outside of the PC market have an OS that's supplied by the vendor -- there isn't an expectation that those system will allow multiple operating systems to co-exist on the system (much less on the same drive).
However, in the PC market there is that expectation -- and has been for over fifteen years. IBM and Microsoft created that expectation (to co-exist with CP/M-86 and the UCSD p-system if my memory and reading of the history is correct).
Naturally the obvious place to put this sort of information would be in the logical boot record (what Unix/Linux refers to as a "Superblock"). This would only cost NT's code a few extra disk seeks at boot time -- seeks that it has to do anyway.
The reason (IMHO) why they put it in the MBR is that even an unpartitioned disk gets its ID. The ID is here for the disk, not the partition -- so it makes less sense to put it in the S-block (even if that sounds safer, cohabitation-wise. Those IDs are what they are -- disk IDs, not partition IDs.)
Classically an OS should ignore an unpartitioned disk. Why should the disk have an ID if it has no partition? If the purpose is to provide unique identification of filesystems so that the striping and mounting mechanisms won't fail as new drives are added to the system -- then you need a partition ID -- and you don't care about disk ID's at all. Additionally you want enough information stored in that ID to minimize the chance of inadvertent duplication and collision (for cases when we move a drive from one system to another).
Finally your mounting/mapping utilities should be robust enough to allow you to mount any of these stripe segments and get what you can off of them.
This sounds robust. NOT! Just what I want -- double the failure points for every volume.
Regardless of the OS, whenever you stripe, you double the possibility of not being able to mount. Not mounting at all (or mounting read-only) when something goes wrong cannot be a blamable decision! (And in the case of striped sets, mounting read-only makes little sense, since all structures are dispatched on both disks.)
I can certainly "blame" a company for any deficiency that I perceive in their software. I select software to meet *my* requirements. Therefore I am the ultimate judge of what is a "deficiency."
My requirements for striping say that the loss of one segment or element in a striped set should not entail the loss of the data on the remaining segments. If no currently available striping system meets that requirement I'll avoid the use of the technology.
This means that a striping system should distribute "superblocks" and inode and directory entries in such a way as to keep them localized to the same segment as the data to which they apply (or duplicated on all segments).
(I realize that duplicating directory information on all segments may be costly -- and I understand that data files may cross multiple segments. Those are implementation details for the author(s) of the file system).
Out of curiosity: How many different striping systems have you used? The phrase "Regardless of the OS" seems awfully broad.
I will plead complete inexperience with them. My take on the term is that it refers to any technique of making multiple drives appear as a single file system (or volume) that doesn't involve redundancy (RAID) or duplication (mirroring/duplexing).
Is there a standard that specifies more implementation details? (i.e. does my set of requirement some how NOT qualify as a "striping" system).
Well, now that Microsoft has "spoken" we're probably all stuck with this [expletive omitted] forever. Please consider mailing a copy of your message and your patches to the LILO and fdisk maintainers.
The problem is: where are they? (I tried to send it once, a few months ago, to an address which was given to me as W. Almesberger's, but to no avail.)
In my fdisk man page I see the following (under Authors):
A.V. Le Blanc. v1.0r: SCSI and extfs support added by Rik Faith. v1.1r: Bug fixes and enhancements by Rik Faith, with special thanks to Michael Bischoff. v1.3: Latest enhancements and bug fixes by A. V. Le Blanc, including the addition of the -s option. v2.0: Disks larger than 2GB are now fully supported, thanks to Remy Card's llseek support.
So it would seem that Rik Faith, Mr. Le Blanc, Michael Bischoff would be good choices.
The address I see for Werner Almesberger is: almesber@bernina.ethz.ch (from the lilo (8) man page).
If that gets no response then I'd post notes to comp.os.linux.development to see who is maintaining the code.
--Jim
From: Anders Karlsson andersk@lysator.liu.se
Hi, I read an article in the Linux Gazette where the author hadn't found any evidence for the rumors about ActiveX for Unix. By mistake I found a press release from M$ about this.
I believe what I said was that I had heard the same rumor -- but that the search engine at www.microsoft.com couldn't find any reference to Linux at all.
I don't know who (if any) is interested in this, but you can find it on: http://www.microsoft.com/corpinfo/press/1997/mar97/unixpr.htm
Yes. I see. This basically says that the job was farmed out to Software AG (http://www.sagus.com) which has a release schedule at:
DCOM Availability Schedule http://www.sagus.com/Prod-i~1/Net-comp/dcom/dcom-avail.htm
Let's hope that this isn't the beginning of a new M$-invasion, against a new platform or market, our Linux.
Luckily there's not much MS can do about Linux. They can't "buy it out." -- They can pull various stupid stunts (like tossing new values into partition tables, trashing ext2 filesystems, even exerting pressure on hardware manufacturers to develop and maintain proprietary adapters that require Microsoft written drivers). These will just make them less interoperable. IBM tried stunts like this in the early days of the PC cloning.
However I think the cat is out of the bag. All we as a community have to do is clearly continue our own work. When you buy a new computer -- ask for Linux pre-installed (even if you plan on re-installing it yourself). If you don't plan to use Windows '95 or NT on it -- demand that it not be included in the price of your system and -- failing that -- VOTE WITH YOUR FEET!
Recently I saw an ad on CNN for Gateway. The ad went on about all the options that were available and encouraged me to call for a custom configured system. Since I'm actually looking at getting a small system for my mother (no joke!) I called and asked if they could pre-install Linux.
Now I will hand it to the sales dude -- he didn't laugh and he didn't stutter. He either knew what I was talking about or covered up for it.
Naturally the answer was: "No. We can't do that."
There are places that can. Two that come to mind are:
(Warning for Lynx users -- both of these sites use frames and neither bothers to put real content in the "noframes" section -- Yech!)
There are several others -- just pick up any copy of Linux Journal to find them.
Granted this is a small niche now. However, it's so much more than any of us back in alt.os.linux (before the comp.os.linux.* hierarchy was established) thought was possible just four years ago.
Even two years ago the thought of buying a system and putting Linux on it -- to send to my MOTHER (literally, NO computer experience) would have been totally absurd. Now it's just a little bit of a challenge.
What's exciting to me is the prospect that Linux may make it mostly irrelevant what hardware platform you choose. Linux for the Alpha, for SPARC, and mkLinux for PowerMacs gives us back choices -- at prices we can dream of.
It's easy to forget about the hardware half of the "Wintel" cartel. However, the hardware platform has had severe design flaws from the beginning. Hopefully we'll see some real innovation in these new hardware platforms. [The introduction of the IBM PC back in '81 caused the "great CP/M shakeout." It also caused me to take a 5 year hiatus from the whole industry -- out of disgust with the poor design of the platform. Even as a high school student I saw these flaws]
-- Jim
From: Bruce W. Bigby bbigby@frontiernet.net
Jim Dennis wrote:
The really important question here is why you aren't asking the support team at RedHat (or at least posting to their "bugs@" address). This 'control-panel' is certainly specific to Red Hat's package.
Well, I've tried communicating with RedHat and had problems. I registered and everything and tried to get support via e-mail. Something went wrong, although I followed their instructions, for reporting problems, exactly. At the time, I was at work when I read your web page and decided to give you a try. Thanks for all of the information!
I hope it helped. I too have been unsatisfied with Red Hat's level of support. Not that I expect a lot of complex personal attention for a package that only costs $50 -- but I was calling representing the US Postal Service's Data Processing Center -- and I was willing to put up about $50/hr for the support call(s).
Alas they just didn't have the infrastructure in place.
Yggdrasil has a 900 line for support -- and Adam Richter has been doing Commercial Linux longer than just about anyone else (SLS might have been there earlier -- but I haven't heard anything about Soft Landing Systems in years).
Yggdrasil also publishes _The_Linux_Bible_ and has a video cassette tutorial on Linux. Unfortunately I haven't installed a copy of their distribution, Plug and Play Linux, for a couple of years. Slackware and later Red Hat seem to have won the popularity contest in recent years.
Unfortunately I've never used Yggdrasil's tech support services. So I can't give a personal recommendation. They do have two pricing plans ($2.95/min. US or $100 (US) for one "guaranteed" issue resolved) and they do mention that the support is available to Linux users regardless of what distribution you're using.
Usually I've managed to bang my head on problems hard enough and long enough that they crack before I do. So I haven't needed to call yet. One would hope that -- with my "reputation" as "The Answer Guy" -- I'd be able to stump them. However Adam Richter has been at this a lot longer than I have -- and was selling Linux distributions before I'd even heard of Linux -- when I was barely starting to play with a used copy of Coherent. So, maybe the next time I have a headache I'll give them a call. I think I'm still entitled to one freebie for that subscription to Plug & Play from a couple of years ago.
Meanwhile, if anyone else has used this service -- or has been using any other dial-in voice support service for Linux -- please let me know. I'll try to collate the opinions and post them in an upcoming issue of LG.
For details look at: http://www.yggdrasil.com/Support/tspolicy.html
[Note: I don't have any affiliation with Yggdrasil or any other Linux vendor -- though several of them are located within a few miles of my home and I do bump into principals for a couple of them at local users groups and "geek" parties]
Another company that offers Linux (and general Unix) support and consulting is Craftworks. I've worked with a couple of their consultants before (when I was a full time sys admin and they were providing some on site expertise to handle some overflow). They don't mention their prices up front (which forces me to suspect that they are at least as expensive as I am). I'm also not sure if they are available for short term (1 and 2 hour) "quickshots."
I suppose I should also mention that I'm the proprietor of Starshine Technical Services. My niche is providing support and training for Linux and Unix systems administrators. I also offer off site support contracts (voice, and dial-up or via the Internet using ssh or STEL). Normally I don't "push" my services in my contributions to Linux Gazette -- I just do this to keep me on my toes.
-- Jim
From: Chris Bradford reynard@gte.net
I have tried and failed to get a fully working ppp link up with GTE Internet Services. When I start pppd manually after dialing in using MiniCom, it'll start the link, and ifconfig shows that it's up and running. However, when I try to ping any site other than the peer, I get a 'Network Unreachable' error on every single packet that ping tries to send out. I'm using Slackware 3.2 w/ pppd v2.2f on a 486SX w/ 8MB of RAM and a 14.4K bps modem on /dev/cua3.
What's your advice to me?
What does your routing table look like? (Use the command netstat -nr to see that).
Your ppp options file (usually /etc/ppp/options) should have a 'defaultroute' directive in it. That will set the ppp0 link as your default route.
That's usually what "network unreachable" means.
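In case it helps, here's a sketch of the two pieces involved -- the option file entry and what a healthy routing table looks like afterwards (the peer address shown is invented):

```shell
# /etc/ppp/options -- add this one word on a line by itself:
#
#   defaultroute
#
# Once the link is up, 'netstat -nr' should show ppp0 as the default
# route -- a line something like:
#
#   Destination   Gateway        Genmask   Flags  ...  Iface
#   0.0.0.0       192.168.10.1   0.0.0.0   UG     ...  ppp0
```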
You'll also need to have a proper value in your /etc/resolv.conf. This is the file that your "resolver libraries" use to figure out what DNS server they should ask to translate host/domain names into IP addresses. Basically all applications that do any networking under Unix are linked with the resolver libraries.
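For reference, a minimal /etc/resolv.conf looks something like this -- the domain and addresses below are placeholders; use the values your ISP gave you:

```
# /etc/resolv.conf
domain example.net
nameserver 192.0.2.1
nameserver 192.0.2.2
```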
-- Jim
From: Gregor Gerstmann gerstman@tfh-berlin.de
Hi Mr. Jim Dennis,
Thanks for your e-mail remarks in reply to my remarks regarding file
transfer with the z protocol in Linux Gazette issue17, April 1997. In
the meantime I received an e-mail that may be interesting to you too:
Hello!
I noticed your article in the Linux Gazette about the sz command, and really don't think you need to split up your downloads into smaller chunks.
The sz command uses the ZMODEM protocol, which is built to handle transmission errors. If sz reports a CRC error or a bad packet, it does not mean that the file produced by the download will be tainted. sz automatically retransmits bad packets.
If you have an old serial UART chip ( 8250 ), then you might be getting intermittent serial errors. If the link is unreliable, then sz may spend most of its time tied up in retransmission loops.
In this case, you should use a ZMODEM window to force the sending end to expect an `OK' acknowledgement every few packets.
'sz -w1024' will specify a window of 1024 bytes.
I'm familiar with some of the tweaking that can be done -- and the fact that it is a "sliding window" protocol. However I still maintain that Kermit is more reliable and gets better overall throughput over an unreliable connection.
Also ZModem is designed for use on 8-bit serial lines. Kermit can be used easily over TCP connections and on 7-bit serial connections. You could definitely use the C-Kermit package from Columbia University however. The Kermit implementations from other sources are usually reliable enough -- but slower than molasses compared to the "real" thing.
From: Pedro Miguel Reis reis@aaubi.ubi.pt
Hi Jim. I have a simple question for you :)! How can I put my video card to work under Linux? It's an Intel ProShare. I would like to save a jpg pic every one or two secs.
Thx for your time.
The Intel ProShare is a video conferencing system. These are normally not called "video cards" in the context of PC's because the phrase "video cards" is taken to refer to one of the cards that drives your video display for normal applications and OS operations (i.e. a VGA card).
There are several framegrabbers that are supported under Linux. However it doesn't appear that the Intel ProShare is supported under any form of Unix. Of course that's just based on a few searches of their web site -- so it's not from a very reliable source on the subject. (I swear, the bigger the company the worse the support information on their web site. You'd think they'd like to trim some of the costs of tech support that they're always griping about.)
Naturally you should contact their support department to verify this (or be pleasantly surprised by its refutation).
Here's a couple of links I found that are related to video capture using CU-SeeMe (a competing technology to Intel's ProShare):
Basically CU-SeeMe uses "off the shelf" video cams -- like the Connectix QCam (which goes for about $100 in most places). It also uses any of several sound boards.
Unfortunately the simple answer to your question may be to get a supported desktop camera.
-- Jim
From: midian@home.ifx.net
Can you tell me if it is possible to set up a Linux system on a Zip disk and where I could find info on doing this? I found a file that
It should be possible. I don't know where you'd find the info, though. I'd start by looking at the Linux HOWTO's collection. There is a HOWTO on Zip Drives with Linux (even the parallel port version is supported).
I'd look at putting DOSLinux on an MS-DOS formatted (FAT) Zip disk. DOSLinux is a very small distribution (about 20Mb installed) which is designed to be installed on a DOS filesystem. It uses LOADLIN.EXE (which I've described in other "Answer Guy" articles) which basically loads a Linux kernel from a DOS prompt -- and kicks DOS out from under itself.
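The LOADLIN invocation looks something like this from a plain DOS prompt -- the kernel path and root device here are examples; adjust them to your own layout:

```
C:\LINUX> LOADLIN ZIMAGE root=/dev/hda1 ro
```

Once it runs, DOS is gone until the next reboot; the Linux kernel takes over the machine completely.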
You can find that collection of HOWTO's at: http://sunsite.unc.edu/LDP/HOWTO/ (and various mirrors).
You can also find a copy of DOSLinux at 'sunsite' and most mirrors.
I use DOSLinux on my laptop (an OmniBook 600CT) and my only complaint has been that it wasn't configured to support the power management features of my laptop.
Frankly I'm not even sure if Linux' APM support will work with the Omnibook at all. I've heard that the PCMCIA adapter is basically too weird for them (which is a real bummer to me).
You have to watch out if you get a copy of DOSLinux. The maintainer, Kent Robotti, has been making frequent -- sometimes daily -- changes to it (or was a couple of months ago).
describes this process IF you have a pre-existing Linux system to install from. I am running a Win95 system with absolutely no hard drive space available. Thanks for any info.
Are you sure you can't even squeeze twenty or thirty meg? With that you can get DOSLinux installed on your normal hard drive -- which is likely to offer much more satisfactory performance. The ZIP drive is likely to be a bit too slow at loading programs and shared libraries -- and DREADFUL if you do any swapping.
Of course if you boot Linux from a Zip disk (or using the "live filesystem" offered by some CD's) you can mount your DOS (Windows '95) partition(s) and create a swap file there.
Although most people use swap partitions -- Linux will allow you to create swap *files* (see the 'mkswap' and 'swapon(8)' man pages for details).
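A sketch of creating an 8Mb swap file -- the path and size here are examples; put the file wherever you have free space:

```shell
# carve out 8Mb of zeroes for the swap file
dd if=/dev/zero of=/tmp/swapfile bs=1024 count=8192
# write the swap signature onto it
mkswap /tmp/swapfile
# Then, as root, enable it:
#   swapon /tmp/swapfile
# To activate it automatically at boot, add a line like this to /etc/fstab:
#   /tmp/swapfile   none   swap   sw   0 0
```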
Note: since you don't have a copy already installed I realize that you don't have the man pages handy -- however you can read those man pages by looking at: http://www.linuxresources.com/man.html
The 'swapon(8)' refers to the man page that's in section 8 (system administration tools) of the system. That's necessary because there's also a man page in section 2 (system calls) which the man command will normally display in precedence to the one you want. So you use a command of the form 'man 8 swapon' to tell the manual system which one you mean. This is unnecessary with most commands since most of the ones you'd be looking for -- most of the time -- would be the "user commands" in section one. Also most of the administrative commands, like mkswap, don't have functions with a conflicting name. This is just one of those quirks of Unix that old hands never think of while it gets novices climbing the walls.
When you use the online man pages at ssc.com (the publisher of the Linux Journal and the Linux Gazette) the form is a little confusing. Just check the "radio button" for "( ) Search for a command" and put "8 swapon" (a digit eight, a space, and the word "swapon") in the text field (blank). Ignore the "Section Index" and the section selector list below that.
Lastly, I'd like to make a comment about running Linux with "absolutely no disk space"
DON'T!
With hard disks as cheap as they are now it doesn't make any sense to try to learn an advanced operating system like Linux without plenty of disk space. Buy a whole hard disk and add it to your system. If you already have two IDE drives -- see if your controller will support four. Most EIDE controllers have two IDE channels -- which allow two IDE drives each on them. If you have a SCSI controller then it seems *very* unlikely that you'd have the whole chain full.
(My old 386 has an old Adaptec 1542C controller on it -- with three hard disks, a magneto optical, a DAT autochanger tape drive, a CD drive and a CD writer. That's full! But, while other people have been buying 486's, then DX2's, then Pentiums, and upgrading their copies of Windows and Office -- I've been filling out my SCSI chain -- so that's a five year accumulation of toys!)
If you really can't afford $200 on a new hard drive -- ask around. You might find a friend with a couple of "small" (200 Mb) drives around that they can't use. I have a couple myself (spare parts drawer).
If you try to run Linux with no disk space you probably won't be satisfied. You can install a base system (no X Windows, no emacs, no kernel sources, no dev. tools, no TeX) in a very limited disk space. That's fine if you know exactly what the system is going to be used for. It's perfect for routers, gateways, and terminal servers -- and I see people putting together a variety of custom "distributions" for these sorts of dedicated tasks. I've even heard that some X Terminals (diskless workstations) use Linux with etherboot patches. In ;login (the magazine for members of USENIX/SAGE -- professional associations of Unix users and Sys Admin's) someone described their use of Linux as a method for distributing software updates to their Win '95 boxes across their networks. Apparently they could squeeze just enough onto a Linux boot floppy to do the trick.
However, I'm guessing that your intent is to learn a new OS. For that you want a more complete installation -- so you can play with things.
-- Jim
From: Vivek Mukherji vivekmu@del2.vsnl.net.in
I bought a book on Linux titled "Using Linux, Third Edition" by Que Inc. It had a Red Hat CD-ROM with it, but when I tried to install it, it did not recognize the Red Hat CD, though it previously made the boot disk and supp disk from the CD. It gave the following error after asking me for the source of media, i.e. from which drive (local CD-ROM, FTP, or NFS) I am going to install: "That CDROM device does not seem to contain Redhat CD in it"
There seems to be no damage on the CD, i.e. no physical damage. I think there must be some other way to install it; after all, I have paid US$60 for that book. Please reply soon.
yours truly
Vivek Mukherji
When you select "CD-ROM" as your installation medium -- what interface are you having the setup program attempt to use?
When you use the CD to create your boot and supplemental diskettes you are presumably using DOS -- which has its own drivers to access the CD.
There are many sorts of CD-ROM drives:
CD-ROM and tape drive support came a few years after the IDE interface became popular for hard drives. ATAPI is an ad hoc standard that extends the IDE interface to these other types of drives. It is an "application programming interface" to which the drivers must be written. Typically all support for ATAPI CD-ROM and tape drives must be done in software.
EIDE is a set of enhancements to the IDE spec. The most notable enhancement is the ability to support drives larger than 528Mb (which was the old BIOS limit of 1024 cylinders by 63 sectors by 16 heads). This is usually done via extended ROM's on the controller, or enhanced BIOS ROM's on the motherboard -- or possibly via software drivers (which are OS specific, naturally).
In addition to those two types of CD-ROM drive there are a variety of proprietary interfaces such as the Mitsumi (very popular for a while -- as it was the cheapest for a while), Sony, Wearnes/Aztech, and others.
Linux supports a very wide variety of these interfaces. However -- it's vital to know what you have. You also might need to know "where" it is. That is to say you might need to know I/O port addresses, IRQ's, DMA settings or parameters. You might also need to pass these parameters along to the kernel as it boots.
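For example, at the LILO "boot:" prompt you can append drive parameters to the kernel command line. The lines below are illustrations of the general form only -- the exact driver names and values for your drive are in the CDROM-HOWTO and the kernel documentation:

```
boot: linux hdc=cdrom                  (ATAPI drive as master on the 2nd IDE channel)
boot: linux mcd=0x300,10               (Mitsumi proprietary: I/O port, IRQ)
boot: linux sbpcd=0x230,SoundBlaster   (Panasonic/Matsushita on a SoundBlaster card)
```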
Another issue is the version of your distribution. Most books are printed in large batches -- so they have a long "shelf life." Most Linux distributions change a couple of times a year. Red Hat, in particular, seems to be putting out a new version every 2 or 3 months. Most of these include significant improvements.
So your money is probably much better spent on the distribution itself rather than trying to get a "bargain" in a book and CD combination. Specifically I recommend buying any book solely on its merits. I don't approve of CD's full of software included with a book unless the software has been stable for some time.
CD's with sample code, HTML and searchable text copies of the books contents, clip art or fonts related to the book, even large bookmark files of related web sites, custom software by the authors -- those are all excellent ideas; otherwise it's shovelware that adds a buck to the production costs (fifty cents for the CD and another fifty cents for the little glue-on vinyl holder and the additional handling) -- and twenty bucks to the price.
So, another thing to try is a copy of the latest Red Hat (4.2) or Debian or whatever. In any event you really need to know the precise hardware and settings for your machine.
-- Jim
From: Michael Sokolow mxs46@po.cwru.edu
Dear Ladies and Gentlemen,
Given the previous discussion about cookies, could someone explain to me
(or point out a topic in help, URL, etc.) just what ARE cookies?
Search the Netscape web site.
Here's an independent answer courtesy of "The Answer Guy" (Linux Gazette's nickname for me):
In programming terminology -- specifically in discussions of networking protocols (such as HTTP and X Windows) a "cookie" is an arbitrary data token issued by a server to a client for purposes of maintaining state or providing identification.
Specifically "Netscape HTTP Cookies" are an extension to the HTTP protocol (implemented by Netscape and proposed to the IETF and the W3 Consortium for incorporation into the related standards specifications).
HTTP is a "stateless" protocol. When your browser initiates a connection and requests a document, binary, or header, the server has no way of distinguishing your request from any other request from your host. It doesn't know whether you're coming from a single-user workstation or a multi-user Unix (or VMS, MVS, MPE, or whatever) host -- or whether the IP address that it sees as the source for this request is some sort of proxy host or gateway (such as those run by CompuServe and AOL).
Netscape cookies are an attempt to add and maintain state between your browser and one or more servers. Basically, on your initial connection to a "cookie generating" site your browser is asked for a relevant cookie -- since this is your initial connection there isn't one -- so the server offers one to your browser (which will accept it unless it's not capable of them, or some option has been enabled to prevent it, or prompt you, or something like that). From then on all other parts of that site (and possibly other hosts in that domain) can request your cookie, and the site's administrators can sort of track your access and progress through the site.
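On the wire the exchange looks roughly like this (the header values and host name below are invented for illustration; the Set-Cookie attributes follow Netscape's preliminary cookie specification):

```
GET / HTTP/1.0                   <- first visit: the browser has no cookie

HTTP/1.0 200 OK
Set-Cookie: visitor=abc123; path=/; domain=.example.com

GET /catalog.html HTTP/1.0       <- later request to the same site
Cookie: visitor=abc123
```

Nothing else ties the two requests to the same browser -- the cookie token is the only state the server has.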
The main advantage to the site is for gathering marketing statistics. They can track which versions of a web page lead to increased traffic to linked pages and they can get some idea how many new and repeat visits they're getting. (Like most marketing efforts at statistics there are major flaws with the model -- but the results are valid enough for marketdroids).
There are several disadvantages -- including significant privacy concerns. There are several tools available to limit the retention and use of cookies by your browser (even if you're using Netscape Navigator). PGP Inc (the cryptography company) has a link on their site to one called "cookie cutter" (or something like that).
About the only advantage to some users is that some sites *might* use cookies to help you skip parts of the site that you've already seen or *might* allow you to avoid filling in forms that you've already filled out.
Personally I think cookies are a poorly chosen way to do this -- client-side certificates (a feature of SSL v. 3.x) are a much cleaner method. They allow the user to get and maintain cryptographically strong "certificates" which can be presented to specific servers on demand. This exchange of certificates involves cryptographic authentication in both directions -- so your browser knows it isn't authenticating to some bogus imposter of a server -- and the server knows that your certificate isn't forged.
SSL client certificates allow you to establish accounts at a web site and securely interact with that site. Cookies can't do that. In addition many people have a vague notion that "cookies" were "snuck in" on them -- so they have a well-deserved "bad press."
-- Jim
From: A Stephen Morse morse@sysc.eng.yale.edu
Dear Mr Dennis:
I currently own an IBM 560 with a one gig hard disc which
has both a win95 partition and a 200m Linux partition
running version 2.0. We plan to upgrade today to a 2gig
Is this one of their "ThinkPad" laptops?
hard disk which accepts its data from the old disc through the PCMCIA ports using a special piece of hardware. I believe the drive is called Extreme Drive. We also have available versions 4.1 and 4.2 of Linux on floppies (by the way 2.0 = 4.0 above). So far we've not been able to get any advice on how to proceed.
"...using a special piece of hardware."
I love that term "special." Sometimes you have to
say it with the right inflection
Any suggestions? We are not super strong with Linux etc.
I think the question is:
How do I backup my current drive and restore it to the new drive?
(with the implication that you'd like to use this "special" device and just "copy" everything across).
There are several ways of backing up and restoring a Linux system. If you have an Ethernet connection to a system with lots of disk space -- or to a system with a tape drive you can do interesting things of the form:
dump -0f - / | rsh $othersystem "dd of=$path_or_device ..."
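The shape of that pipeline can be tried entirely locally, substituting tar and dd for dump and the remote shell (every path below is invented for the demonstration):

```shell
# Pack a directory tree into an archive through a pipe -- a local
# stand-in for the dump | rsh ... dd pipeline:
mkdir -p /tmp/demo/src /tmp/demo/store /tmp/demo/restore
echo "precious data" > /tmp/demo/src/file.txt

# "Backup": stream the tree into dd, which writes the archive
tar -cf - -C /tmp/demo src | dd of=/tmp/demo/store/backup.tar 2>/dev/null

# "Restore": stream the archive back out of dd into tar
dd if=/tmp/demo/store/backup.tar 2>/dev/null | tar -xf - -C /tmp/demo/restore

cat /tmp/demo/restore/src/file.txt
```

The two-command shape is the same whether the far end is a file, a tape device, or a "dd" running on another host over rsh -- only the endpoints change.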
If you can borrow or purchase a PCMCIA SCSI controller that Linux supports on this system you can hook up an external hard drive or tape unit and use that.
Those are the most straightforward methods for getting *everything* across.
Another approach is to identify just your data (maybe you keep it all under your /home/ and /usr/local/ directory trees -- or maybe you *should*). Now you get your new disk, install it, get some upgrade of your favorite Linux distribution (I hear the new Debian 1.3 is pretty good), install and configure that and -- finally -- just restore the selected portions of your data that you want.
If you're concerned about the potential loss of data or down time from any of these methods you might also consider renting a system (desktop or laptop) for a week to use while you're straightening things out on your main system. This is advice to consider any time you're doing a major hardware upgrade to an "important" system.
Interesting question!
Do any of the computer rental services offer Linux systems?
(PCR, Bit-by-Bit -- who else is in that business?)
-- Jim
From: sloth lsoth7@hotmail.com
hi. whenever i try to install linux (so far i have tried RedHat, Slackware and Debian) the install program crashes at random times. I have tried removing all unnecessary hardware, ie sound cards etc, but it doesn't seem to make a difference. I have an Intel P150MHz, Triton VX main board, S3 ViRGE graphics card, 16MB RAM and a 2.0GB Quantum harddisk. Any help would be MUCH appreciated! cheers, sloth...
Have you had your memory thoroughly tested?
I would take out your memory (presumably they're SIMM's) and bring them in to a good repair shop for testing. I DON'T recommend software diagnostics for this (like AMIDIAGS, Norton's NDIAGS, "System Sleuth" etc).
Do you run any other 32-bit software on this system? (Win '95 and Windows 3.x don't count)
Can you install and run NT, Netware, or FreeBSD?
I've seen motherboards that just wouldn't handle any true 32-bit OS for sustained use (presumably buggy chipsets) -- that's why Novell and Microsoft have these "compatibility" lists of motherboards.
Have you tried taking out the fancy video card and putting in a simple VGA (no frills -- Paradise chipset)?
Most of the Linux install scripts and programs (different for each distribution) just use text mode. Therefore it's very unlikely that the video card *type* is a problem. However if your particular card has a defect it could be something that only affects your system under Linux or some other OS'. It's a long shot, and some EE (electronics engineer) might tell me it's impossible -- but I'd try it anyway.
(I keep a couple of spare old VGA cards and even an old Hercules -- monochrome graphics -- card around for just these sorts of testing).
What sort of hard disk controller are you using? (IDE? SCSI?)
Some IDE controllers have buggy chipsets (some of them are even supported by compile time options in the Linux kernel). However, IDE controllers are cheap -- so keeping an extra around for testing is a very small investment.
SCSI host adapters are somewhat touchier and more expensive. Some of them are nominally supported by Linux (and other OS') but aren't worth keeping in your system. For example the Adaptec 1542B was a piece of junk. At the same time I use Adaptec 1542C and 1542CF and the various 2940's without hesitation.
RAM is the most likely culprit. The motherboard chipset is another possibility. A defective video card or a buggy HD controller are next in line.
It's possible that your system has some sort of bizarre "top memory" which requires an address range exclusion or that you need to "reserve" some I/O ports so Linux won't use them or probe into them for hardware. You could spend a career trying different "stripped down" kernels on boot floppies and learning all the idiosyncrasies of your hardware. However -- it's probably more profitable in the long run to replace any hardware that's causing trouble.
The advantage of PC hardware is that it's cheap and widely available. Its curse is that it's often *cheap* and the specs are *widely* interpreted. Now that Linux is becoming available on some other hardware platforms -- and especially now that we're seeing "clones" of SPARC, Alpha, and PowerPC systems for rates that some of us can afford -- we might see some advantages from stepping away from the hardware half of the WIntel cartel.
-- Jim
From: Steven Smith ischis@evergreen.com
GNU's gcc is part of the slackware package that I have loaded on my system. I can and have compiled and linked C code.
I can compile the standard C++ code below (if I haven't mis-entered the code) but for some reason the C++ libraries will not link correctly (i.e. I get an error):
#includ <iostream.h>
I think you mean
#include ...
main() { cout << "hello world\n"; }

Poor form. Unix programs should be:

int main ( int argc, char * argv[] ) ...

... or at least:

void main () ...

----------------

gcc -c program_name.C    <- no errors
gcc program_name.C       <- errors
Do you know what might be missing?
Your error messages.
Here's a way to capture sessions when you're trying to write messages to the Linux User's Support Team, to me, to the Linux Programmers' Mailing List, or to any of the appropriate news groups:
Get to a shell prompt and issue the command:

script ~/problem.log

Run your test (demonstration of the problem). Back at the shell prompt, type Ctrl-D or issue the exit command. Then edit the ~/problem.log file (take all the weird escape sequences out).
An easier way is to use emacs' "shell-mode" -- just start emacs and use the M-x shell command. This creates a shell buffer (a sub task in emacs) which allows you to run tty style programs (no full screen "curses" stuff). The output from these shell commands will appear in this buffer and you can use normal emacs cursor, scrolling, cut, and paste operations to work with that output. For example I pasted your program into a new buffer, saved it, "fixed" a couple of minor things, switched to my shell mode buffer (I usually keep one handy) and ran the following sequence:
[jimd@antares lgaz]$ ls
hello.C
[jimd@antares lgaz]$ cat hello.C
#include <iostream.h>
int main( int argc, char * argv[] )
    {
    cout << "hello world\n";
    return(0);
    }
[jimd@antares lgaz]$ make hello
g++ hello.C -o hello
[jimd@antares lgaz]$ ./hello
hello world
[jimd@antares lgaz]$

... which I then simply pasted into this buffer.
Note that I use the make command here. A nice feature of 'make' (at least the GNU make) is that it can make some guess about what you mean even if you don't supply it with a Makefile. So my command make hello forces make to look for a .c, .C or .cpp file to compile and link. If it sees a .o file it will try to link it with cc -- but for a C++ file you need to link it with g++.
A nice side effect of using make this way is that I don't have to specify the -o (output name) and I don't end up with a file named a.out. It "makes" a program named hello.
So the source of your problem is probably that you are compiling your program with gcc in a way that confuses it -- so it tries to link it as a C program rather than a C++ program. If you call gcc via the link 'g++' (just another name for it) you'll see the whole thing work. The compiler pays attention to how you called it (the value of its argv[0]) and makes assumptions based on that.
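That argv[0] dispatch can be sketched with an ordinary shell script and a symlink -- one file, two names, two behaviors (all the names below are made up for the demonstration):

```shell
# One script, two names: behavior switches on the name it was invoked by,
# just as the gcc driver does when it is invoked as "g++".
cat > /tmp/driver <<'EOF'
#!/bin/sh
case "$(basename "$0")" in
    c++driver) echo "would link against the C++ libraries" ;;
    *)         echo "would link as plain C" ;;
esac
EOF
chmod +x /tmp/driver
ln -sf /tmp/driver /tmp/c++driver   # a second name for the same file

/tmp/driver      # invoked under its own name
/tmp/c++driver   # same file, C++ name, different behavior
```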
Of course I can't verify that the errors I got were the same as the ones that you see -- since you didn't capture them into your message. In any event using make hello works -- using g++ hello.C works -- using gcc hello.C doesn't link properly and complains about unreferenced stuff and using gcc or g++ with the -c gives me an object file (hello.o) which is, for our purposes, useless.
A better venue to ask questions about compiling under Linux might be the Linux programmers list (as I mentioned earlier) or in any of several comp.lang.c and comp.lang.c++ newsgroups (since there is nothing Linux specific about this).
If you consider it a bug that gcc recognizes the capital C for C++ when generating .o files and doesn't automagically link with the appropriate libraries in the next pass -- take it up with the participants of the gnu.gcc.* or the gnu.g++.* newsgroups. (There's probably a very good reason for this behaviour -- though I confess that I don't see it).
-- Jim
To: Toby Riley toby@handc.btinternet
James,
I have been reading your page with great interest but I can't find
anything about removing LILO and restoring my MBR. Unfortunately I have
to de-install Linux for a while. I have tried running lilo -u and lilo
-U and when the PC reboots I just get LI and the system hangs.
Personally I've never heard of a -u switch to lilo.
Normally you have to replace your lilo MBR with some other valid MBR. Most people who are disabling Linux on a system are restoring access to an existing set of DOS partitions -- so using the DOS MBR is in order.
To do that -- boot from a DOS floppy -- and run FDISK /MBR. This should exit silently (no error and no report of success). The /MBR switch was added, undocumented, to version 5.0 of MS-DOS. It won't work with previous versions.
I can boot Linux off a floppy and the re-run LILO which adds my boot options and restore my system to a usable state. But I can't get rid of it and restore the Win95 boot up.
Under the hood Win '95 is MS-DOS 7.0 -- just run FDISK /MBR.
We eagerly await your return to the land of Linux.
-- Jim
From: RHS Linux User 6ng1@qlink.queensu.ca
hello answer guy!
Problem: Printing text / postscript documents.
Printing graphics (using xv) works fine, after having my printcap file set up for me, using apsfilter. I own a kyocera f-3010 and this printer can emulate an HP LaserJet Ser II. However, printing documents is a completely different story. Trying to print from, say, Netscape or LyX gets a printout of two or three "step ladder" lines, the output usually being something like "/invalid font in findfont . cannot find font Roman ... etc". Looks like it is not finding the appropriate ghostscript fonts. Is there any way to ensure that ghostscript can recognize my fonts (using xfontsel shows all my installed fonts)? Would you know how to rectify this problem?
Like X Windows, printing is a great mystery to me. I managed to get mine working -- including TeX with dvips (on my DeskJet 500C) -- but I still don't know quite how.
xv works and Netscape and LyX don't. Can you print a .dvi file using dvips? Can you print a Postscript file using lpr? How about mpage? Does that work?
The stairstep effect is common when raw Unix text is going to a printer that's expecting MS-DOS CRLF's (carriage return, linefeed pairs). That makes it sound as though the other applications are bypassing the filter in your /etc/printcap file (or that xv is somehow invoking the right filter before passing the data directly to the printer).
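The usual cure -- and roughly what a printcap text filter does -- is to insert a carriage return before each linefeed on the way to the printer. A minimal sketch (the file name is invented):

```shell
# Turn Unix LF line endings into the CRLF pairs the printer expects
printf 'line one\nline two\n' | awk '{ printf "%s\r\n", $0 }' > /tmp/crlf.txt

# The converted file is one byte longer per line -- the added CR's
wc -c < /tmp/crlf.txt
```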
Thanks a million for your help, this is something that has been bothering me for a while now.
Yeah. I let printing bother me for about a year before I finally forced it to print something other than raw (MS-DOS style) text.
You have gone through the Printing-HOWTO's haven't you?
-- Jim
From: Andrew Ng lulu@asiaonline.net
Dear Sir,
I have a question to ask: Does Linux support disks with density
2048bytes/sector?
Linux currently doesn't support normal disks with large block sizes. (CD-ROM's have large block sizes -- but this is a special case in the code).
It is likely that support for larger block sizes will eventually be added to the kernel -- but I don't think it will be in before 2.2 (not that I actually have an inside track on if or when anything is going to happen in kernel development land -- that's just my guess).
I have bought a Fujitsu MO drive which support up to 640MB MO disks with density 2048bytes/sector. The Slackware Linux system does not support access to disks with this density. Windows 95 and NT support this density and work very well. Is there any version of Linux which support 2048bytes/sector? If not, is there any project working on that?
Someone from Fujitsu's support team called me back on this (as I'd copied an earlier message to their webmaster).
The report was that the smaller 540Mb MO media are supported with no problem -- but that the high density media with the large block sizes weren't supported. If I recall correctly he said that this doesn't work for any of the other versions of Unix that Fujitsu knows of (with their drive).
-- Jim
From: Sean McCleary sean@cdsnet.net
Anyhow, here's my problem:
I recently renamed my system in my /etc/HOSTNAME file. Ever since I
made that change, my system's telnet daemon has stopped allowing incoming
connects from ANYWHERE. I was told this has to do with my recent
system-renaming, but the man who I was talking to about it never told me
WHY or how to fix it.
I've checked my /etc/hosts.allow and /etc/hosts.deny.
These two files control the behavior of tcpd (the TCP Wrappers program by Wietse Venema).
You might also want to look at your /etc/hosts file. This file is used by most system resolver libraries in preference to DNS.
The resolver libraries are the code that allows client programs on your system to translate domain/host names into IP addresses. There are several schemes for doing this -- which can be set in different priorities for each host.
The oldest method for performing this resolution was a simple lookup in the local /etc/hosts file (there was also an /etc/networks file back then -- you don't see them very often now). This is still common for small networks (less than about 25 systems).
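For a network that small the local file is just a table of addresses and names; a sketch (all addresses and names below are invented):

```
# /etc/hosts -- consulted before DNS when the local files come
# first in the resolver order
127.0.0.1       localhost
192.168.1.1     gateway.example.org     gateway
192.168.1.10    antares.example.org     antares
```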
The most widely used method is DNS (also known as BIND -- the Berkeley Internet Name Domain -- a.k.a. 'named'). Actually DNS is the protocol and BIND is the commonly available server software.
Another fairly widespread naming service is NIS and its cousin NIS+. These were both created by Sun Microsystems and published as open specifications. This system was originally known as "Yellow Pages" -- and many of the commands for managing the service still have the prefix "yp" (i.e. 'ypcat'). However a company (British Telecom if I recall correctly) objected to the trademark infringement and Sun was forced to change the designation.
NIS and NIS+ are designed to distribute more than host and network name resolutions -- they are primarily used to manage accounts across whole domains (networks) of hosts. This is especially important among systems that are using NFS, since that usually requires that you maintain synchronized UID's across the enterprise. (The normal NFS behavior is to grant file access based on the effective UID of the user on the client system -- this can be overridden in a cumbersome fashion -- but most sites simply synchronize the UID's -- usually via NIS or by using rdist and distributing whole /etc/passwd files).
Under Linux there is a file named /etc/host.conf (note: SINGULAR "host"). This sets the priorities of the resolver libraries -- which is typically something like:
order files bind nisplus nis
multi on

(look in the /etc/hosts and /etc/networks files first -- then try DNS -- then NIS+ and finally NIS -- try multiple resolutions).
Why is this happening, Answer Man?
I don't know. Did you look at the output of 'tail /var/log/messages' for clues? Are you sure that this is a problem with your host's name? Did you change to shadow passwords around the same time?
One way to get more clues about any failure you get from any service in the inetd.conf file is to replace the service's entry temporarily with a command like:
## telnet stream tcp nowait root /usr/sbin/tcpd in.telnetd
telnet stream tcp nowait root /usr/sbin/tcpd /usr/sbin/strace \
        -o /root/tmp/telnet.strace /usr/sbin/in.telnetd

Here I've commented out the old telnetd line and put in one that keeps a system call trace file. Looking at this file can give some clues about what the program was trying to do up until it disconnected you.
I'll grant that you need to know something about programming to make any use of this file. However you probably don't need to know as much as you'd think. These traces start to make a little sense after you've read a few dozen of them -- particularly if you have a "working" and a "broken" configuration to run your tests with.
-- Jim
From: Jon Jacob xaviermoon@earthlink.net
I am trying to configure X. I have a Config file set to the SVGA generic using the XF86Config.eg file that comes with the Slackware96 distribution.
I have a Sony Multiscan15sf with an ATI Mach64 PCI video card with 1 meg of VRAM. When I run startx, the monitor locks so that it turns to black, but it still is getting a signal from the PC because the PowerSaving light stays green.
I tried fiddling with the Config file with no change. I ran startx redirected to an output file to see the error messages, but I just get the same file I got when I ran X -probeonly.
I could not find a driver for an ATI Mach64 PCI card that matches mine. Do I need one? If so, where would I get it? Can I use some generic driver?
Also, the RAMDAC was shown by the probe to be "unknown" so I left it commented out in the Config file. Could this be the problem?
I am very frustrated after hours and hours of attempts. Please help!
I keep trying to tell people: I barely use X. X Windows configuration is still a mysterious "black art" to me that requires that I have the system in front of me to do my hand waving in person.
I think you should search the X Windows HOWTO file for the strings "ATI" and "Mach." I'm pretty sure you need a special server for the Mach 64's and I wouldn't be at all surprised if it was one of those deviants that doesn't work with a generic SVGA driver.
The first time I ever got X running I resorted to IRC (Internet Relay Chat) -- where I joined the #Linux channel and hung out for awhile. After watching the usual banter for about 20 minutes and adding a few (hopefully intelligent) comments to the discussions at hand I timidly asked for some help. Some kind soul (I don't remember the nickname) asked for some info, showed me how to do a /dcc (direct client connection?) to send the file to him, edited my XConfig file and sent it back.
One of the beauties of Linux is that I was able to test a couple of revisions of this file while maintaining my connection. Naturally, I glanced over the file before using it. If you decide to take this approach I recommend that you avoid any binaries or source code that you don't understand that someone offers to xfer to you. You will be running this as 'root' on your system.
A config file with which you are moderately familiar is a bit safer -- though you could always end up with some weird trojan in that, too.
This is not to suggest that IRC has a higher percentage of crackers and "black hats" than anywhere else on the net -- just trying to emphasize that you have no way of identifying who you were working with -- and all it takes is one.
Another approach you might try is to call ATI and let them know what you want. As more of us use Linux and demand support for it the various hardware companies will have their choices -- meet market demands or lose marketshare.
If you decide to take this to the news groups be sure to go for comp.os.linux.x -- rather than one of the more general newsgroups. It is a little frustrating that so many X questions end up in the various other Linux news groups -- X Windows for Linux is no different than X Windows for any other x86 Unix. However I've never seen an XFree86 newsgroup so...
-- Jim
From: Romeo Chua rchau@st.nepean.uws.edu.au
Hi! I would like to know if I can use the JDK 1.1.2 for Solaris x86 on Linux. Does the iBCS2 module support Solaris x86 applications?
Last I heard a native JDK was already available for Linux (although that might be 1.1.1).
I have no idea whether SunSoft has maintained any compliance to iBCS in the Java stuff for Solaris.
-- Jim
From: Kevin T. Nemec knemec@mines.edu
Dear Answer Guy,
I was wondering if it is possible to force a program to use its own
colormap externally. That is, can you force a program without a built in
option to use its own colormap to do so in some other way. I don't mind
the "flashing" in some applications as long as I can see all the colors.
Kevin Nemec
I've heard that xnest can be used to run one X session inside of another. I don't know if this would help. I've used XFree86's support for multiple virtual consoles to run two X Windows sessions concurrently (using {Ctrl}+{Alt}+{Fx} to switch between them, of course). These can be run with different settings (such as 8bpp on one session and 16bpp on the other).
Other than that I have no idea. I keep trying to tell people I'm a *Linux* guy -- NOT an XFree86 guy. I run X Windows to do the occasional XPaint or XFig drawing, to run Netscape on sites that are just too ugly to tolerate in Lynx, and (recently) to play with xdvi and ghostview (to preview my TeX and PostScript pages).
So, anyone out there that would like an XFree86 Answers Column in Linux Gazette (or anywhere else preferably under LDP GPL) has my utmost support. (Although our esteemed editor, Marjorie Richardson will certainly make the decisions).
-- Jim
From: Paul L Daniels jdaniels@stocks.co.za
With respect to a question that was in "The Answers Guy" re LILO only presenting "LI" on the screen then _hanging_.
I found that problem too... the problem (at least for me) was that I was including a DOS partition in the LILO.conf file. After removing the partition manually, running liloconfig and reinstalling from current lilo image, everything worked.
If you were including a DOS partition in your lilo.conf file with some syntactic errors (making it look like a Linux partition perhaps), or if your previous edit of the file had not been followed by running /sbin/lilo (the "compiler" for the /etc/lilo.conf file) -- I would expect you to have problems.
However it is quite common to include one or several DOS partitions in a lilo.conf file. That is the major purpose of the LILO package -- to provide multiple boot capabilities.
If this is all babble and drivel, then ignore it, I wasn't sure who to post to.
I suspect that there was something else involved in the "stanza" (clause, group of lines) that you removed from your conf file. Since you've solved the problem it sounds like little would be gained from attempts to recreate it -- or to guess at what it had been.
-- Jim
From: Sean sdonovan@hq.si.net
Sorry if I am one of hundreds w/ this kinda question./....but try to answer if you have time..
So I had linux loaded up and working fine was even able to make my dos/95 partition work ok too. So then I actually loaded the 95 gui {it had just been a sys c: to get a bootable dos/95 since I didn't have the 95 files for the gui at the time}
So now all I can get is 95... I tried the primitive fdisk thing thats part of the "do you want to install linux again" deal w/ the two disks, also tried making different partitions active w/ fdisk as well... but no workie workie.

I can boot w/ the two disks that are part of the linux install, use the rescue option, and then mount the hd linux partition to a directory of my choice, and if I try to run lilo from there {since its not in /sbin/lilo on the floppies} it moans about lilo.conf not around and /boot/boot.b not present and such. Sooo I try to recreate that structure on the root {ramdisk:?} or floppy or whatever I am running everything from... run out of diskspace trying to copy hda files from now mounted hd to /dev of ram/floppy.

So I'm stuck... Any ideas? I have read all relevant faq's / scanned every apparently related how-to etc... to no avail... maybe its like you said on your page; maybe I'm not really running a "boot" floppy... help if ya can. My lilo.conf was reliably letting me into command line dos/95 and linux/xwindows etc.. system is an IBM thinkpad 760el if that's relevant.
The short story is that you don't know how to run /sbin/lilo from a boot floppy (rescue situation).
There are two methods. One is to use the chroot command:
Basically after you boot you mount your root file system (and your usr if you have that separate) -- something like so:
mount /dev/sda5 /mnt/
mount /dev/sdb1 /mnt/usr
(Here I'm using the example of an extended partition on the first SCSI drive for my normal root partition and the first partition on my second SCSI drive as my usual usr partition -- change those as necessary).
You can (naturally) create a different directory other than /mnt/ or under /mnt and mount your filesystem under that.
Now you cd to that:
cd /mnt/

And run the chroot command -- which takes two parameters: where to make the new root of your session's filesystem and what program to run in that "jail":
chroot /mnt/ /mnt/bin/bash
Here we're running the copy of bash that's under our chroot environment. Thus this session, and all processes started by it now see /mnt as /.
This was the original use of the chroot call -- to allow one to work with a subset of your filesystem *as though* it were the whole thing (handy for developers and doing certain types of testing and debugging -- without risking changes to the whole system).
Now you should be able to vi /etc/lilo.conf and run /sbin/lilo to "compile" that into a proper boot block and set of mappings. (Note: "/etc/" and "/sbin/" will really be /mnt/etc and /mnt/sbin -- to the system and to any other processes -- but they will *look like* /etc/ and /sbin/ to you).
The other approach is to create a proper (though temporary) lilo.conf (any path to it is fine) and edit in the paths that apply to your boot context. Then you run /sbin/lilo with the -C option to point it at that non-default lilo.conf (which can be named anything you like at that point).
The trick here is to edit the paths in properly. Here's the lilo.conf for my system (antares.starshine.org):
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
prompt
timeout=50
other=/dev/hda1
        label=dos
        table=/dev/hda
image=/vmlinuz
        label=linux
        root=/dev/sda5
        read-only
Here's how I have to edit it to run lilo -C when I'm booted from floppy and have mounted my root and usr as I described above (on /mnt and /mnt/usr respectively):
boot=/dev/hda
map=/mnt/boot/map           # current (emerg) path to map
install=/mnt/boot/boot.b    # current (emerg) path to /boot
prompt
timeout=50
other=/dev/hda1
        label=dos
        table=/dev/hda
image=/mnt/vmlinuz          # path to my kernel
        label=linux
        root=/dev/sda5
        read-only
Note that I've added comments to the end of each line that I changed. (I think I got them all right -- I don't feel like rebooting to test this for you). The specifics aren't as important as the idea:
The lilo program (/sbin/lilo) "compiles" a boot block from information in a configuration file -- which defaults to /etc/lilo.conf.
References to directories and files in the .conf file must be relative to the situation *when /sbin/lilo is run*. References to devices and partitions typically don't change in this situation.
I hope that helps. It is admittedly one of the most confusing aspects of Linux to Unix newbies and professionals alike. In some ways I prefer FreeBSD's boot loader (the interactive and visual debug modes are neat -- you can disable various drivers and view/tweak various kernel settings during the boot). In other ways I prefer LOADLIN (which can load Linux or FreeBSD kernels from a DOS command prompt or from a DOS CONFIG.SYS file). In yet other ways I like the OpenBoot (forth interpreter and system debugger) used by SPARC's.
I would like to see PC's move to the OpenBoot standard -- it's SUPPOSED to be part of the PCI spec. Basically this works by replacing the processor-specific machine code instructions in device ROM's (for video cards and other adapters) with FCode (byte compiled Forth). The system (mother) board then only has to implement a Forth interpreter (between 8 and 32K of footprint -- much smaller than existing BIOS chips).
The advantage is that it allows your adapters to be used on systems regardless of the processor. Forth is a very efficient language -- as close to machine language as an interpreter can get -- and closer than many assemblers (some of which generate stray code).
Too bad there are no PC manufacturers who understand this YET!
From: Sean sdonovan@hq.si.net
Thank you from the bottom of my heart for your informative and very useful email. It took about 50 seconds using the chroot command {see, I learned something new today :-) } and I am back up... worked like a charm. I'll try not to bother you in the future, but I may blow the horn at a time of utmost need. It's pretty cool when stuff works; what's frustrating as heck is when you can't find the answers. I really did try reading the FAQs/HOWTOs and so on. You are right about the email coherency -- need to work on that. I guess I figured that to a hack like yourself it would make sense {all the stuff that I had tried}, and I wasn't sure you would actually write back.
I'm doing this from minicom so everything workie workie :-)
When you have time: why did another friend {not in your league,
apparently} suggest:
linux root=/dev/hda2 ro
thanks again,
From: John Messina John.Messina@astramerck.com
My dad just gave me his old 386 machine. It's not much, but I wanted
to start experimenting with it and to try to use it as a firewall.
I upgraded it to 8MB of RAM and dropped in an ISA Ethernet card -
just the bare minimum. I'm attempting to install RedHat 4.1 onto this
machine. My main machine is already up and running with COL Standard
and since the 386 has no CD-ROM, I attempted to do an NFS install.
The NFS part of the install works perfectly (nameserver, exports,
etc. on my main machine is configured correctly and autoprobe can find
the 386's ethernet card). The problem occurs when the install starts
to look for the 386's SCSI card. The 386 has a Seagate ST01/02 SCSI
card with one hard drive attached. The ST01/02 is supported by the
install, but autoprobe cannot find the card and I've tried all of the
combinations for the parameters that are listed - checked the RedHat,
CND, and COL manuals. No IRQ/Base address combination that I've tried
works. I've looked at the board itself, but can't tell how it's set up.
I guess my question comes down to the following:
Is there a way during the install to find out what the IRQ/Base
address for this board is? Or, since the machine will successfully
boot to DOS/Win3.1, is there a way to determine these settings from
the DOS/Windows environment?
There are a variety of "diagnostics" utilities for DOS
-- MSD (Microsoft) comes with some recent versions of DOS
and Windows, NDIAGS comes with recent versions of the
Norton Utilities, American Megatrends used to sell
AMIDIAG, and there used to be some others called
Checkit! and System Sleuth. There are also a large
number of DOS shareware and freeware programs which perform
different subsets of the job.
Another program that might list the information you're looking
for is Quarterdeck's "Manifest" which used to be included
with QEMM since about version 7 or 6 and with DESQview/386
(one of my all-time favorite DOS programs -- with features I
still miss in Linux!).
The system I'm typing this on is an old home built 386.
It is the main server for the house (the clients are Pentia
and 486's -- mostly laptops). So you don't have to "apologize"
about the age of your equipment. One of the real virtues of
Linux is that it breathes new life into old 386's that have been
abandoned by the major software vendors.
One approach to consider is to find a good SCSI card. I
realize that you'll spend more on that than you did on the
computer -- but it may be worth it nonetheless. Over the
years I've upgraded this system (antares) from 4Mb of RAM
to 32Mb and added:
Adaptec 1452C controller,
one internal 2Gb SCSI,
and a 538Mb internal,
a 300Mb magneto optical drive,
a 4mm DAT autochanger,
an 8x CDROM,
a Ricoh CD burner/recorder,
and an external 2Gb drive
(that fills out the SCSI chain --
with additional drives including a Zip
on the shelf)
upgraded the old 200Mb IDE hard drive to a pair of
400 Mb IDE's,
upgraded the I/O and IDE controller to one with
four serial ports (one modem, one mouse, two terminals --
one in the living room the other in the office),
and a 2Mb STB Nitro video card.
My point is that you can take some of the money you save
and invest in additional hardware. You just want to ensure
that the peripherals and expansions will be useful in your
future systems. (At this point memory is changing enough
that you don't want to invest much in RAM for your 386 --
you probably won't be able to use it in any future systems) --
bumping it up to 16Mb is probably a good idea -- more only if
it's offered to you for REAL cheap.
Other than that I'd do an AltaVista search (at Yahoo!)
for Seagate ST01/02 (ST01, ST02, ST0). My one experience
with the ST01 is that it was a very low quality SCSI card
and not suitable for serious use. I'd also search the
"forsale" newsgroups and ads for a used BusLogic (you might
find one for $10 to $25 -- don't pay more than $50
for a used one -- low end new cards can be had for $60).
--
Jim
From: Vaughn (Manta Ray) Jardine vaughn@fm1.wow.net
I Use a multiconfig to boot either to Dos, Win95, or Linux (Redhat 4.1).
I use loadlin from the autoexec.bat to load the linux kernel, however I
recently accidently deleted the dir with loadlin and the vmlinuz.
Ooops! I hate it when that happens!
I made a boot disk on installation so I use that to get to Linux. I
copied the vmlinuz from the /boot dir and put it on my Dos partition.
Now I don't have the original loadlin so I took one from a redhat 4.2
site on the net. It still won't boot. It starts and halfway through
bootup it stops.
Do I have to get the loadlin that came with redhat 4.1? What am I doing
wrong. It boots fine off the boot disk.
Vaughn
I'd want to find out why the LOADLIN is failing.
The old version of LOADLIN that I'm used to did require
that you create a map of the "real BIOS vectors" -- which
is done by allowing REALBIOS.EXE to create a boot disk,
booting off of that, and then re-running REALBIOS.EXE.
This file would be a "hidden + system" file in C:\REALBIOS.INT
The idea of this file is to allow LOADLIN to "unhook" all
of the software that's redirected BIOS interrupts (trap vectors
-- sort of like a table of pointers to hardware event signal handlers)
to their own code. To do this you must have a map of where
each interrupt was pointed before any software hooked into it
(thus the boot disk). This boot disk doesn't boot any OS --
it just runs a very short block of code to capture the table
and save it to floppy -- and displays some instructions.
You may have to re-run REALBIOS.EXE (generate a new BIOS
map) any time you change your hardware. This is particularly
true when changing video cards or adding, removing, or changing
a SCSI adapter.
Obviously the version of LOADLIN that's used by Red Hat's
"turbo Linux" and by the CD-based install programs of other
Linux distributions doesn't require this -- though I don't know
quite how they get around it.
So, try installing the rest of the LOADLIN package and running
REALBIOS.EXE. Then make sure you are booting into "safe"
DOS mode under Win '95. I'd also consider putting a block
(like a lilo.conf stanza) in your CONFIG.SYS which invokes
LOADLIN.EXE via your SHELL= directive. That block should not have
any DEVICE= or INSTALL= directives except those that are needed
to see the device where your LOADLIN.EXE and kernel image file
are located. This should ensure that you aren't loading
conflicting drivers. There are details about this in the
LOADLIN documentation.
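To illustrate, a DOS 6.x multi-configuration CONFIG.SYS along those lines might look like the following sketch. The paths, menu block names, and the root= device are hypothetical examples, not details from the reader's system:

```
[menu]
menuitem=LINUX, Boot Linux via LOADLIN
menuitem=DOS, Plain DOS

[LINUX]
rem No DEVICE= or INSTALL= lines here -- nothing hooks the BIOS
rem interrupts before LOADLIN takes over.
shell=c:\loadlin\loadlin.exe c:\loadlin\vmlinuz root=/dev/hda3 ro

[DOS]
device=c:\dos\himem.sys
shell=c:\dos\command.com c:\dos\ /p
```

Picking LINUX from the DOS startup menu then hands the machine straight to the kernel without any DOS drivers loaded first.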
--
Jim
From: Ken Ken@KenAndTed.com
Hi... I'm having some trouble, and maybe you could help??
I recently went from kernel 2.0.27 to 2.0.3. Of course, =) I used Red Hat's
RPM system (I have PowerTools 4.1) and upgraded. After the config,
compile (zImage), and modules stuff, I changed LiLo's config, to have
old be my backed up kernel of 2.0.27, and linux be the new one. Then,
I did a zlilo, and everything ran smoothly.
I presume you mean that you installed the 2.0.30 sources
and that you did a make zlilo (after your make config;
make dep; and make clean)
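The build steps Jim presumes can be sketched as the usual 2.0.x sequence (a sketch of the conventional procedure, run from the kernel source tree):

```shell
cd /usr/src/linux
make config              # or 'make menuconfig'
make dep; make clean
make zImage              # build the compressed kernel
make zlilo               # copy it into place and re-run lilo
make modules; make modules_install
```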
But now, one the new kernel, after it finds my CD-ROM drive, it won't
mount my root fs. It gives me a kernel panic, and says unable to mount
root fs, then gives me the address 3:41. What's going on??
I've tried recompiling and remaking lilo many times. (oh yeah... I didn't
forget dep or clean either) Nothing works. I'm using the extended 2
fs, and it's built right in the kernel...
Did you do a 'make modules' and 'make modules_install'?
If you do a 'diff' between /usr/src/linux/.config and
/usr/src/linux-2.0.27/.config what do you see?
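To show what that comparison looks like, here is a toy illustration with two made-up .config fragments; the real files would be /usr/src/linux/.config and /usr/src/linux-2.0.27/.config:

```shell
# Two miniature .config files standing in for the real kernel configs
cat > old.config <<'EOF'
CONFIG_EXT2_FS=y
CONFIG_BLK_DEV_IDE=y
EOF
cat > new.config <<'EOF'
CONFIG_EXT2_FS=y
# CONFIG_BLK_DEV_IDE is not set
EOF
# A '<' line is an option present in the old config but lost in the new one
diff old.config new.config || true   # diff exits non-zero when files differ
```

An option that silently dropped out between configurations (here, the IDE driver) is exactly the kind of difference that produces an "unable to mount root fs" panic.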
Are you sure you need features from the 2.0.30 release?
You may want to stick with 2.0.29 until a 2.0.31 or 32
goes out. I know of at least one problem that's forced
me to revert for one of my customers*.
It has always been the case with Linux and with other
systems that you should avoid upgrading unless you know
exactly what problem you're trying to solve and have some
understanding of the risks you are taking. That's why it's
so important to make backups prior to upgrades and new software
installations. I will note that my experience with Linux
and FreeBSD has been vastly less traumatic in these regards than
the years of DOS and Windows experience I gained before I
taught myself Unix.
* (using the -r "redirect" switch of the ipfwadm command to
redirect activity on one socket to another works through
2.0.29 and dies in 2.0.30 -- and gets fixed again in a "pre31"
that one of my associates provided to me).
Here's my lilo config file...
That looks fine.
I suspect there's some difference between your kernel
configurations that's at fault here. Run diff on them
(the files are named .config in the top-level source
directory), or pull up the 'make menuconfig' for each
and place them "side-by-side" (using X or on different
VC's).
Hint: You can edit /usr/src/linux/scripts/Menuconfig
and set the single_menu_mode=TRUE (read the comments in
the file) before you do your make menuconfig -- and you'll
save a lot of keystrokes.
Maybe you need one of those IDE chipset boxes checked.
My hard drive that boots is hda, and my Linux drive is hdb. I took out
read-only a while ago, to try to solve the problem. It made no difference.
It'd be great if you could help me out a little. Thanks, Ken...
Copyright © 1997, James T. Dennis
Published in Issue 20 of the Linux Gazette August 1997