...making Linux just a little more fun!
We have guidelines for asking and answering questions. Linux questions only, please.
We make no guarantees about answers, but you can be anonymous on request.
See also: The Answer Gang's Knowledge Base and the LG Search Engine
Welcome to a summer day among The Answer Gang. We're having a quiet little picnic here... hmm, perhaps a little too quiet. Nonetheless we've got some juicy Answers for you to enjoy reading.
If you've got a good juicy question, I encourage you to please email it to The Answer Gang (email@example.com). I mean, sure, we have these nice guidelines we'd like you to try out first - but we welcome the stumpers. There are a lot more distros than there used to be, and even we learn something from our fellow Gang members occasionally.
As the question of how big business takes Linux to heart is now taken a bit more seriously than in past years, we'd like to encourage corporate types to ask us their tricky questions too. We can't promise the speediest response time (although many have been pleasantly surprised) or that we really will answer (although we have quite a few people now - your chances are good). If you want to be anonymous we can take it a step further and try to sanitize things like IP addresses when you describe your network... feel free to go into detail, or to sanitize your notes yourself (encouraged). If you've got one of those "this may be confidential" notes, make sure you've explicitly granted us permission to publish the thread worldwide.
"The enterprise" is such an incredibly vague buzzword these days I'm surprised Viacom/Paramount doesn't get mad about the press abusing it. Of course they're the ones who named their now famous line of starships after a verb that we've turned into this planet's most abused group noun. But let's take a serious look at the question, shall we?
What draws the line between simply a decent-sized company and an "enterprise"? Multiple sites, sure. Is a family chain of restaurants an "enterprise" then? Maybe. Divisions and departments and layers of management who have never met each other, because the hierarchy has grown over time to handle so many groups of people? Yeah, definitely getting there. So we need project planning software. OpenOffice serves charting needs and presentation, plus the usual word processing. How about planning large networks? Some of the logic I've seen for keeping departments out of each other's business via internal firewalling ... defies all logic but slicing through the Gordian knot and redesigning the layout. There's social questions too (what, you think internal policies grow on trees? Cut down quite a few, maybe) and development plans that may span 5 or 6 years.
Oops, 6 years ago we weren't nearly so impressive. I think that some companies will only see Linux creep into units as their plans turn out to be met by it. So for a long while to come, it's going to be very important how we handle working with multiple platforms - whether they're the Borg of Redmond, or Sun, or Macintoshes. That means coughing up schematics and documents that their software will handle too - or making sure that our open source stuff runs on a world of other systems. The latter is a better answer for the long term - applying new logic of ergonomics and workplace expectations into the results - but sucks for the short term, because units don't necessarily want to fire all their software - or risk being unable to work on the same documents as other divisions do. Or their partner companies in a consortium. Which came first, the chicken or the egg? Something that's not quite a chicken, generally waits for the chicken to "lay an egg" in the idiomatic sense: Linux or another free solution will be tried when an expensive one is either too painfully expensive to contemplate first, or flops so horribly that departments stray from the golden path of Fully Paid Support Contracts to get something that Just Works.
And, as my husband has discovered in small business of years gone by, there will be times that when such solutions work they will be left to stay and serve. In fact it will be insisted upon: as department heads change and the server continues to operate, a given system will retain its traditional bearing... and it will be just as hard for a $DISTRO-driven company to switch to $DISTRO2 if the first does not serve them optimally, because there will be years of unit dependence on features that work "just so" on $DISTRO. This is the real thing any new distro - or in fact any distro which seeks to move the new "enterprise" Linux users over to its userbase - has to make easy, or it will be an uphill battle every step of the way.
We already know that at least some of "the enterprise" is ready for Linux... here and there. Is "the enterprise" ready for the change in paradigm to go with it, of how our software changes and open source projects flow into and out of the equation? We don't know. Are the brand name distros up to the combined challenge of having their toes nibbled by specialty distributions (see LWN's Distributions page) at the same time as trying to play both the "Try us, we're different" and "no problem, we're the same" cards in the battlefield... err, marketplace, yeah, that's what they call it.
Speaking of the battlefield, in my country, the last Monday in May was Memorial Day, when we honor our war veterans of all varieties. So I'd like you to stop, and consider those you know, or who your families know, who have fought great battles and won... especially those who won at the cost of their lives or livelihood, and also of those who fought for the lives and livelihood of people they never knew or understood.
Thanks for reading. Have a great summer. See you next month.
From Chris Gibbs
I think this is more a Microslop question, but maybe you can help.
I have 2 PCs on a 10baseT network; normally both run Linux and everything is fine. I have a fairly paranoid setup where hawklord.universe.com is 192.168.0.1 and cannot ftp gigahawk.universe.com. But hawklord can http gigahawk OK. (Confession... my modem is ISA, hawklord has ISA slots, gigahawk does not... so hawklord is just a box I can ssh to and run Netscape from; it's also where I keep documentation on an Apache server, so the ability to http hawklord would be good.)
[Faber] I didn't quite follow this. I think you're saying that everything works the way you want it to, right?
And are these names related to that Saturday morning cartoon where all the heroes had wings? I think one of them was called Hawkman.
gigahawk (192.168.0.2) can ftp hawklord, http hawklord, whatever. Security doesn't matter at all for hawklord; I just assume it's insecure.
If I boot Windoze ME on gigahawk I just can't find hawklord. ping just times out.
[Faber] Oh, that's easy enough to fix. Don't boot into Windows ME! <bah-da dump> <rimshot>
So like er, how do I get MS ping to find the linux box? Everything on hawklord works fine.
[Faber] You can ping hawklord by IP address, right? Go no further until you can do that. Run winipcfg to make sure it has the IP address/subnet mask you think it does. If you can ping hawklord by the IP address (NOT the name!), then you may read on.
[Ben] If you can't find "winipcfg", try "ipconfig" from the CLI. There are several versions of Wind0ws that don't have the GUI version.
People complain Linux is hard to configure, but it is (at least for me) simple compared to Wintendo. I've found places in Windoze to put DNS numbers; what I can't find is hosts.allow.
And you won't. What you're looking for is the /etc/hosts file. hosts.allow is used only for, IIRC, tcp-wrappers.
[Ben] BZZT. It's just a host access control mechanism, not dependent on TCP wrappers AFAIK (although you can do some interesting additional stuff if you have it; see "man hosts.allow".)
Well, actually, hosts.allow and hosts.deny are used by tcpd and other programs compiled against libwrap (the TCP Wrappers libraries) which include things like the Linux portmapper (used by NFS, and other ONC RPC services).
So you're sort of both right, depending on what you mean by "TCP Wrappers" (the binary /usr/sbin/tcpd, or the library, libwrap.so against which it's linked).
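To make the distinction concrete, here's a sketch of what a TCP-wrappers-style access policy might look like. The service names and addresses are illustrative, borrowed from this thread's example network, not a recommendation for any particular setup:

```
# /etc/hosts.deny -- default-deny for everything checked via tcpd/libwrap
ALL: ALL

# /etc/hosts.allow -- then poke specific holes
# (192.168.0.2 is gigahawk in this thread's example network)
sshd:    192.168.0.2
in.ftpd: 192.168.0.2
```

Note this only covers services that consult libwrap; it has nothing to do with hostname lookup, which is /etc/hosts territory.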
[Faber] The file you want is in $(WINDIR)/System32/etc/hosts.sam (I'm pretty sure that's where it is. At worst, search for "etc"). You need to populate it and rename it to simply "hosts".
[Ben] "hosts" does not have the functionality of "hosts.allow" or "hosts.deny"; it just does IP to-hostname-mapping. Chris is right: there's no equivalent file in Wind0ws - although you can get the functionality in other ways (not that I'm an expert on obsolete OSes, but I've had to set up a few mixed networks.)
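For the record, a hosts file has the same format on both systems - plain IP-to-name mappings, one per line. Using the names from this thread, it might look like:

```
127.0.0.1    localhost
192.168.0.1  hawklord.universe.com  hawklord
192.168.0.2  gigahawk.universe.com  gigahawk
```

The same file contents work as /etc/hosts on the Linux side and as the renamed "hosts" file on the Windows side.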
[Faber] You will also see a "lmhosts.sam"; don't bother with that unless you have Samba running on hawklord. And if you're going to play with Samba and lmhosts, be sure to read up on MS netbios technology; that oughtta make you not want to do it.
[JimD] If you can't ping it by IP address, and it's on your LAN, that suggests an ARP problem on one side or the other. Try arp -d $IPADDR on the Linux side of things. Then try running tcpdump -e -v to watch the ARPs and other traffic between the two. The -e will force tcpdump to print MAC addressing info on each data frame it captures --- so you can spot if some other Ethernet card is responding to the ARP requests. Of course you can use ethereal or "tethereal" (the text mode of ethereal) in lieu of tcpdump if you prefer.
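A rough sketch of that debugging session, on the Linux side (run as root; the interface name and addresses are illustrative):

```
# Flush any stale ARP entry for the Windows box
arp -d 192.168.0.2

# Watch ARP and other traffic to/from it, with MAC addresses shown (-e)
tcpdump -e -v -i eth0 host 192.168.0.2

# Then, from the Windows side, run:  ping 192.168.0.1
# and check whether the ARP reply in the tcpdump output comes
# from the MAC address you expect.
```

If some other card answers the ARP request, you've found your culprit.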
BTW, there's a really good intro to reading what I think of as "libpcap syntax" - the stuff that's put out by tcpdump, ethereal, etc., by Karen Kent Frederick at SecurityFocus. In fact, it's a four-part series:
"Studying Normal Traffic":
<http://www.securityfocus.com/infocus/1221/>
<http://www.securityfocus.com/infocus/1222/>
<http://www.securityfocus.com/infocus/1223/>
OK, I tried winipcfg and I think it gives the clue, because there is a tick in the "NetBIOS Resolution Uses DNS" checkbox. Apart from that it's what I expect. ping 192.168.0.1 continues to time out.
[Faber] Since you're pinging the IP address, name resolution (DNS, /etc/hosts, etc.) doesn't enter into it. (But does Windows try to do a NetBIOS name resolution with the IP address? Hmm...)
If you can't ping using the IP address, something is screwed up on your network: either the IP address is wrong (the other box isn't on the 192.168.[something other than 0] network, is it?), the subnet mask is wrong, or the Windows box isn't configured for networking.
Did you try Jim's suggestion about ARP? That information would be useful.
Does that mean I must set up a name server on hawklord? Also, I'm confused about bindings: it seems I must check "Client for MS Networks" or printer sharing, else I don't get anything. I don't really seem able to alter anything (situation normal for me in Microkak).
[Faber] Get it to ping with the IP address, then we'll worry about name servers (but in general, no, you don't have to set up a name server).
You do have TCP/IP installed on the Windows box, yes? "Client for MS Networks" enables the SMB/NetBIOS stuff. Printer sharing uses the same stuff; I don't know why they're separate.
[David] Silly idea: try having the MS box ping itself. I've seen times when the MS box was so confused that it could not ping itself, let alone anyone else. It took a reboot, removal of all networking, reboot, reinstall networking, reboot, and finally it would ping itself, and lo and behold it could ping the rest of the network too.
[Ben] I'm with David on this one, and will confirm the above as standard behavior (I've seen it a number of times), although I think of it in different terms:
    ping 127.1              # Test the local TCP/IP stack
    ping localhost          # Test the local hosts file
    ping xxx.xxx.xxx.xxx    # Test "outside" - ping the immediate upstream IP
    ping foo.bar            # Test all by pinging an "outside" host by name
Finding out where this breaks is the first key to troubleshooting this kind of problem, whatever the OS.
From Becca Putman
In your webpage, you said, "If you can't access your tape drive, try loading the st.o module."
I'm very new at this, so please bear with me... how do I load that module? I did a simple installation of RedHat 9(Shrike). When I installed my Adaptec aha-2940 card, RH saw it immediately. It also sees my tape drive (a DEC TZ87 - supposed to be the same as a Quantum DLT2000), but it doesn't load a driver for it. Suggestions?
[Faber] Are you sure RH hasn't loaded a driver for you? Sounds like it did. Why do you say it didn't load the module?
Anywho, you can look at the list of loaded modules with 'lsmod' to see if it is loaded. To load a module, you can type "modprobe st" and the system will load the st.o module and any dependencies.
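As a concrete sketch of that check-then-load sequence (run as root; the exact output will differ per system):

```
# Is the SCSI tape driver already loaded?
lsmod | grep '^st'

# If not, load it (modprobe pulls in any dependencies too):
modprobe st

# The kernel log should then show the tape drive being attached:
dmesg | tail
```

If modprobe succeeds, the tape should appear as /dev/st0 (or /dev/nst0 for the non-rewinding device).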
I created the tape with high density and 6250 blocksize. However, restore is complaining about a tape read error on first record. If I take out the blocksize argument, it says:
    [root@tara tape]# restore -x -v -f /dev/st0 *
    Verify tape and initialize maps
    Input is from tape
    restore: Tape block size (80) is not a multiple of dump block size (1024)
[K.-H] /dev/st0 rewinds automatically on closing of the filehandle; /dev/nst0 is the no-rewind version, which will not rewind the tape automatically.
This is valid for all commands using /dev/[n]st0, including mt.
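So for multi-step tape operations, the non-rewinding device is usually what you want. A sketch (device names as in this thread; positions are illustrative):

```
mt -f /dev/nst0 rewind      # explicit rewind, once, at the start
mt -f /dev/nst0 fsf 1       # skip forward one file mark; tape stays put
restore -x -v -f /dev/nst0  # read from the current position, not the start
```

Had /dev/st0 been used for the mt commands, the tape would have been rewound as each command exited, undoing the positioning.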
[Faber] Isn't this saying that you should be using 80 instead of 6250?
[Ben] Probably not. I suspect that what it's seeing is a header that it doesn't understand, which happens to have "80" in the position where the block size would normally go.
The tape was created with OpenVMS v6-something back in 1997. Please tell me there is some way to read it...? Pretty please? Pretty please with sugar on top?
[Faber] Can anyone help the lass? I can't remember the last time I did a tape restore, let alone doing one from a different OS (waitaminnit! can you restore a VMS tape to a un*x/Linux box?).
[Ben] Erm, well... only if you wanted to make it into a VMS box. In short, no - at least as far as I know. You should be able to extract the tape contents, though - and I seem to remember that there's a VAX/VMS emulator available for Linux, so you might even be able to run what you extract.
I found a free product called vmsbackup, which will take a tape made with VMS and extract it to a unix (read, Linux) box. It can be found at http://vms.process.com/ftp/vms-freeware/FREE-VMS, if anyone is interested.
Anyway, I've come to the realization that my tape has a bad block - right at the very front. sigh I tried to use mt to move the tape past it, but it appears that just before mt exits, it rewinds the tape. Real helpful.
From Ashwin N
I am facing a strange problem. ViM has a plugin that enables users to edit a compressed file just like a normal file. Say you open a file.txt.gz in ViM; it directly shows and allows you to edit the uncompressed text. But, strangely, on my system this works for .gz files but not for .bz2 files!
The plugin file in question is gzip.vim (on my system it is in /usr/share/vim/vim61/plugin/gzip.vim). The file insides look OK to me, the right commands are being called for .Z, .gz and .bz2 files. But, when I open a text file compressed using bzip2 I get junk in ViM, whereas .gz files open correctly.
Hoping a solution/lead from you guys
[Kapil] It works here! What I have is:
    Package: vim
    Version: 6.1
    Debian version: 320+1
You didn't say what version etc. you have!
One possible problem that you may have is that your gzip.vim calls "bunzip2" rather than "bzip2 -d". The former may not exist in some broken installations of "bzip2".
Mine is ViM Version 6.1, from RedHat 8.0.
No, it uses "bzip2 -d". And both "bzip2 -d" and "bunzip2" work at the shell. I even changed "bzip2 -d" to "bunzip2" in the gzip.vim file, but it is still not working.
This strange problem is really bugging me. I am lost as to the solution. Any other things I need to check?
[Jason] The 'gzip.vim' in /usr/share/vim/plugin has a last-change date of 2003 Apr 06.
My version uses the '-d' flag and doesn't rely upon gunzip and bunzip2.
This is just a shot in the dark, but you might want to try listing the autocommands in the 'gzip' group in vim, like this:

    :au gzip

...which should dump a list that looks something like this:
    --- Auto-Commands ---
    gzip  BufRead
        *.gz      call s:read("gzip -d")
        *.bz2     call s:read("bzip2 -d")
        *.Z       call s:read("uncompress")
    gzip  BufReadPre
        *.gz      setlocal bin
        *.bz2     setlocal bin
        *.Z       setlocal bin
    gzip  BufWritePost
        *.gz      call s:write("gzip")
        *.bz2     call s:write("bzip2")
        *.Z       call s:write("compress -f")
This was where I got the clue to the problem.
When I did a ":au gzip" I got -
    --- Auto-Commands ---
    gzip  BufRead
        *.gz      let ch_save = &ch|set ch=2
                  '[,']!gunzip
                  set nobin
                  let &ch = ch_save|unlet ch_save
                  execute ":doautocmd BufReadPost " . expand("%:r")
    gzip  BufReadPre
        *.gz      set bin
    gzip  BufWritePost
        *.gz      !mv <afile> <afile>:r
                  !gzip <afile>:r
    gzip  FileAppendPost
        *.gz      !mv <afile> <afile>:r
                  !gzip <afile>:r
    gzip  FileAppendPre
        *.gz      !gunzip <afile>
                  !mv <afile>:r <afile>
    gzip  FileReadPost
        *.gz      let ch_save = &ch|set ch=2
                  '[,']!gunzip
                  set nobin
                  let &ch = ch_save|unlet ch_save
                  execute ":doautocmd BufReadPost " . expand("%:r")
    gzip  FileReadPre
        *.gz      set bin
    gzip  FileWritePost
        *.gz      !mv <afile> <afile>:r
                  !gzip <afile>:r
All .gz related stuff, nothing to do at all with .bz2 and .Z. At this point, I realized that after the commands in gzip.vim were being loaded, they were being overridden by the above somewhere.
I checked the global vimrc file, which is in /usr/share/vim/vim61/macros, and I hit the bull's eye. In that file, the gzip autocommands were being overridden with the stuff shown above. So I just deleted the gzip autocommands in the global vimrc file and everything is working fine now. All three supported file types (.gz, .Z, .bz2) are opening properly.
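For anyone hand-rolling the same thing in their own vimrc, the idea behind the plugin can be sketched as a small autocommand group along these lines. This is an illustrative sketch in the style of the old gzip.vim (the range-filter approach), not the plugin's actual code, and it hasn't been checked against every Vim 6.x:

```
augroup gzip
  " au! clears earlier definitions, so a global vimrc can't shadow these
  au!
  au BufReadPre,FileReadPre    *.gz,*.bz2,*.Z  setlocal bin
  au BufReadPost,FileReadPost  *.gz   '[,']!gzip -d
  au BufReadPost,FileReadPost  *.bz2  '[,']!bzip2 -d
  au BufReadPost,FileReadPost  *.Z    '[,']!uncompress
  au BufReadPost,FileReadPost  *.gz,*.bz2,*.Z  setlocal nobin
augroup END
```

The key point from this thread is the au! line: without it, whichever definition loads last wins, which is exactly how the global vimrc's gzip-only commands were clobbering the plugin's .bz2 support.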
[Thomas] This incident was also reported on the Vim mailing list, but I was too slow on the uptake to mention it at the time.