Broken things, fixed
Content is copyright © 2002-3 Matthew Astley $Id: index.html,v 1.15 2006/04/16 02:12:53 mca1001 Exp $
This page is dedicated to those who bother to use a search engine to find answers to their questions.
Naturally caveats will apply. Anyway, hopefully this will be of some use to someone, even if only as a starting point for debugging.
...later... satisfied visitors so far: one, that I know of. Thanks for dropping me a line, Zak.
Sometimes you can get a little wobble if you put a small power supply (PSU) under or near the monitor, if they aren't magnetically shielded. Just move it away.
This was bigger, and there was nothing obvious in range. My diagnostic tool was a fairly coarse open solenoid wound on a ferrite rod, plus an oscilloscope. It's quite likely that a crystal earpiece or mic input to a tape recorder would have been adequate instead of a 'scope.
In this case I found the 50Hz (mains frequency) field emanating from the cable trunking, specifically the earth bond wire for an outside water pipe. This metal pipe came out of the ground inside the building and went out through the wall to a tap. It was earth bonded internally - this is normal practice for internal metalwork, although potentially dangerous to people outside the office in the event of an earth fault.
A current clamp (non-contact ammeter) registered up to 10 amps in this earth bond. This unbalanced current was causing the huge field and disrupting the picture on the CRT. The current was flowing because the metal pipe served as an excellent earth and return path to the substation, so the current that normally just takes the neutral return was sharing this route back home.
Generally you can't just remove earth bonds. Talk to a qualified electrician. What we did was cut the water supply permanently (it was mostly used for washing cars IIRC), remove the external tap and crimp the pipe stub back into the concrete floor. Since no metal was exposed we could remove the earth bond, and then the picture was as steady as any other.
There is a problem with your certificate database [Error Code: -8174]

A long-running problem from about May 2002, fixed 2002-10-29. There were other reports of the same problem.
This is caused by the setting in Edit > Preferences... > Privacy & Security > Validation > OCSP being set to "Use OCSP to validate only certificates that specify an OCSP service URL". Changing it back to "Do not use OCSP for certificate validation" makes the problem go away.
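For what it's worth, the same thing can be flipped in the profile's prefs.js; the pref name below is from my recollection of Mozilla of that era, so treat it as an assumption and check about:config first:

```
// Assumed pref for Mozilla of this vintage: 0 = do not use OCSP,
// 1 = use OCSP for certificates that specify a service URL.
user_pref("security.OCSP.enabled", 0);
```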
Of course, finding this out took quite a while because my method was to brainwash mozilla (move the configuration directory away and let it make another), then move things back until it broke again.
I haven't bothered to investigate what the problem is exactly. It's probably fixed in the next release anyway...
This is another long standing problem (since early 2002), and something to do with the history sidebar. Certainly if I view a "folder" of history items such as "4 days ago" I get sent to find.com. It used to redirect to a holding page, but now apparently someone has bought it. Anyway, I didn't want to go there...
At some point the behaviour changed, so that clicking a link would sometimes take me to find.com. I grepped the .mozilla settings directory for 'find.com' and removed some apparently pointless entries (I forget which, sorry). Half of the problem solved, anyway.
I wanted to try the linux-wlan-ng under Woody because I don't really want to run the testing release on this laptop. Unfortunately I've come to this rather late in the day (2002-11-5).
Normally building a testing package from source under the current stable release doesn't seem to cause any problems, but this time it turned out to be more 'fun' than I was expecting. I blundered through, fetching bits I needed and trying to compile/install them, until I found a route that worked. It may not be a correct route, but I have installable packages which appear to contain the relevant files...
cd linux-wlan-ng-0.1.15
fakeroot debian/rules binary
fakeroot debian/rules KSRC=~/compile/linux-2.4.19 KDREV=5 KVERS=2.4.19-relapse PSRC=/usr/src/modules/pcmcia-cs binary

The first fakeroot command builds the utilities (hmm, I think this may be possible without the new debhelper .. but I needed the modules anyway). The second builds modules and requires the kernel source and other stuff to be present. The extra make variables are:
time make-kpkg --rootcmd fakeroot --config menuconfig --revision 5 --append-to-version -relapse kernel_image

so the modules live in /lib/modules/2.4.19-relapse.
I realised later that using apt-src might have made some of this easier. Never mind.
(The version number may not be relevant at this point [2002-01-09], because future versions are likely to continue to have the problem, and if it gets fixed the fix may be backported quite easily.)
When trying to make a socket (tcp or unix domain) connection to an X11 server, you get an error like this:

_X11TransSocketOpen: socket() failed for tcp
_X11TransSocketOpenCOTSClient: Unable to open socket for tcp
_X11TransOpen: transport open failed for tcp/localhost:10

You faff about for a bit trying various things with Xauthority files and DISPLAY environment variables, then you give up and try strace. After various libraries are opened and presumably loaded,
JDK-1.3.1_06_SUN/jre/lib/i386/libawt.so
JDK-1.3.1_06_SUN/jre/lib/i386/libmlib_image.so
/usr/X11R6/lib/libXp.so.6
/usr/X11R6/lib/libXt.so.6
/usr/X11R6/lib/libXext.so.6
/usr/X11R6/lib/libXtst.so.6
/usr/X11R6/lib/libX11.so.6
/usr/X11R6/lib/libSM.so.6
/usr/X11R6/lib/libICE.so.6
JDK-1.3.1_06_SUN/jre/lib/i386/libfontmanager.so
JDK-1.3.1_06_SUN/jre/lib/i386/libawt.so

libX11 (as it turns out) tries to make a socket and connect it to the specified X server. The socket() call works normally, but then errors are printed:
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 456
write(2, "_X11Trans", 9) = 9
write(2, "SocketOpen: socket() failed for "..., 36) = 36
write(2, "_X11Trans", 9) = 9
write(2, "SocketOpenCOTSClient: Unable to "..., 52) = 52
write(2, "_X11Trans", 9) = 9
write(2, "Open: transport open failed for "..., 49) = 49
Turns out there's a limit to the number of file descriptors you can already have open at the point where you try to make a connection to your X server. This doesn't apply on all systems, but apparently it does on some flavours of Linux at least [I haven't decoded the #if conditions yet either].
The current work-around is to not have more than 255 file descriptors open at the time you try to connect to the X server, whether this is by tcp or unix domain socket.
You're probably only going to bump into this if you're running Java, I think. Only in Java would you need to open so very many files at once, and still need to connect to X afterwards (for example, to render a graph in your servlet).
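For illustration, here's a little Linux-only Python sketch (invented for this page; /dev/null stands in for the "very many files") showing descriptor numbers climbing past the 255 mark, the point at which the affected libX11 builds refuse to connect:

```python
import os

def fd_count():
    # Linux-specific: each entry in /proc/self/fd is an open descriptor.
    return len(os.listdir("/proc/self/fd"))

# Hog 300 descriptors, the way a busy JVM might.
held = [open("/dev/null") for _ in range(300)]
print(fd_count() > 255)   # an affected Xtrans would now fail to connect

for f in held:
    f.close()
print(fd_count() <= 255)  # back under the limit; connecting is safe again
```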
This is from memory, and it's a summary of the method I eventually settled on. There were many dead ends, but I believe the significant steps were something like this.
You can see the patch I suggested to help clarify the error. I don't want to get involved in messing with OPEN_MAX and its relative TRANS_OPEN_MAX because I don't understand why they're limited to 256 under GNU, when the limit is obviously higher. Maybe later...
For now, the work-around is just to ensure that AWT (in the case of Java) is initialised before loads of other things are opened. It's possible that this could be kludged by opening something pointless early on, and then closing it before requesting the X connection. This is likely to be a fragile solution though!
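The kludge can be sketched like this in Python (my own illustration, not the Java workaround itself); it works because POSIX hands out the lowest-numbered free descriptor, so releasing a reserved low one just before connecting means the X socket lands below the cut-off:

```python
import os

# Reserve a low-numbered descriptor early, before the application
# opens hundreds of files.
placeholder = os.open("/dev/null", os.O_RDONLY)

# ... the application merrily opens loads of other things ...
held = [open("/dev/null") for _ in range(300)]

# Release the low descriptor just before asking for the X connection.
# POSIX reuses the lowest free number, so the next open()/socket()
# call gets it back.
os.close(placeholder)
reused = os.open("/dev/null", os.O_RDONLY)  # stands in for the X socket
print(reused == placeholder)  # True
```

As noted above, this is fragile: anything else that opens a descriptor in the gap between the close and the connect steals the low number.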
The main lesson to be learned is that running grep is a quick and easy way to get the computer to do your work for you. No idea where the error came from? Search the whole of the filesystem for a unique-looking string and see what comes back. Bonus points for searching a small enough subset of the disk that it mostly fits in RAM - performance on the next grep pass with different arguments will be greatly improved.
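As a toy illustration (the paths and file contents here are invented for the demo):

```shell
# Build a tiny settings tree, then hunt a suspect string across it.
mkdir -p /tmp/grep-demo/profile
echo 'user_pref("browser.redirect", "find.com");' > /tmp/grep-demo/profile/prefs.js
grep -rl 'find.com' /tmp/grep-demo
# prints /tmp/grep-demo/profile/prefs.js
```

The -l flag lists only the matching filenames, which is usually what you want when deciding what to delete or inspect next.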
Hopefully this will serve someone as a crash course in how to cheat when debugging things, and serve others as an example of why error messages should always be verbose. Especially when something unusual or unpredictable might have happened. 8-)
scanner.c: read_scanner(0): funky result:-75. Consult Documentation/usb/scanner.txt.

Lots of those in the syslog/console/dmesg, when the driver tries to talk to the scanner.
RTFM again, slap self and switch over to using the Plustek driver. The Epson driver doesn't like the 1260 because it uses a different chipset. I had forgotten about this.
After changing the configuration, we get a "Segmentation Fault". Some relevant-looking fragments of the diagnostics were
export SANE_DEBUG_DLL=12
export SANE_DEBUG_PLUSTEK=12
...
[sanei_debug] Setting debug level of dll to 12.
[dll] sane_init: SANE dll backend version 1.0.8 from sane-backends 1.0.11
...
[plustek] Plustek backend V0.45-4, part of sane-backends 1.0.11
...
[plustek] drvopen()
[plustek] usbDev_open(auto,0x04B8-0x011D)
[plustek] Found device at >/dev/usb/scanner0<
[plustek] Vendor ID=0x04B8, Product ID=0x011D
[plustek] usbio_DetectLM983x
This appears to have been caused by the earlier use of the Epson driver; after a reboot we can't reproduce the segfault.
This was a while ago now though, so I apologise for the slightly rough program output and lack of detail.
So, Matthew gets his come-uppance for not reading the docs properly, and then more trouble for failing to explain that scanners eat disk space very quickly.
This happened to me when I upgraded from 2.2.20 to 2.4.21-pre7 [2003-04]. /dev/input/mice worked fine until I upgraded. When I ran menuconfig for 2.4.21-pre7, there seemed to be a lot of options that needed changing, so I merrily went and changed them.
I'm not sure when in kernel history the CONFIG_USB_HIDINPUT option appeared under CONFIG_USB_HID (hid.c) in "USB support", but if you don't enable it then the mouse events don't get passed down the chain to input.o (input.c) and mousedev.o (mousedev.c).
Of course this is obvious when you read the option carefully. On the other hand if you're not sure what you changed when you upgraded stuff, it could be tricky to find.
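For reference, the relevant corner of the .config looked something like this after the fix (option names as I remember them from 2.4-era kernels; double-check against your own menuconfig):

```
# USB Human Interface Device (full HID support)
CONFIG_USB_HID=y
# The easy-to-miss one: without it, hid.c never feeds events onward
CONFIG_USB_HIDINPUT=y
# Generic input layer and the /dev/input/mice device
CONFIG_INPUT=y
CONFIG_INPUT_MOUSEDEV=y
```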
I recently [2004-02] recorded a public talk for a friend who couldn't attend. The resulting sound quality is probably adequate, but it could have been much better.
Many of these hints will apply in other situations, but I would imagine that larger laptops won't suffer quite so badly in the "noise" department. It seems likely that electrical screening between the laptop's sound systems and data busses was kept to a minimum to avoid wasting space inside the machine.
I used a fairly cheap desktop microphone from Maplin, connected to my laptop in the usual way. It seems to be a lot less directional than the packet would have you believe, but I forgot that people's voices are quite directional!
Putting the laptop back in the rucksack was probably a good idea. The hard disc is very noisy, and of course it must run for the duration.
Putting the microphone on the table behind the speaker was a mistake. On a chair in the front row of the audience might have been better, or perhaps on the floor at the front, although either of these positions would probably pick up a lot of noise from the audience, the wooden floor and the chairs moving about.
I did a quick test of sound quality before the talk began, but I used the tiny built in speakers for this test. It was enough to check that I had a signal, but not enough to check the quality. Next time I will take my headphones.
Because the mic signal was fairly weak, the electrical noise inside the laptop swamped the signal. The noise consists of crackly white noise and the sound of the CPU chopping on and off.
Having taken a two hour sound sample I needed to cut it up into tracks according to the logical flow of the talk and edit out a few very loud noises and pauses. I found the Sweep sound editor more than adequate for this, but I was glad I had recently thrown a gigabyte of RAM into this machine!
After doing the slice and dice with the main sample, I chopped out lots of snatches of "silence" (i.e. just machine noise) and stuck them together. Then I filtered the lot through dnoise (part of csound) - it's magical! The noise just vanishes! It does leave some artifacts, but you can tweak around to reduce those. Here are my Makefile rules:
%.cdr: %.wav
	sox $< $@

%.wav: %.aiff nz.wav
	csound -U dnoise -S2 -m-40 -n5 -N 4096 -t1 -W -i nz.wav -o $@ $<
(yeah, watch the tabs). I don't remember why I saved AIFF format from Sweep, but it was convenient enough to use wav as an intermediate for generating MP3s and CD-R ready files.
I tried an assortment of CD labelling programs from the Debian distribution and found disc-cover was my favourite.
This is probably covered somewhere else, but just in case...
I think these files are generated by Windows NT onwards when you put accented characters in a Notepad file and save it. The simple text part may look right if you cat it to a unix terminal, but if you view it with less it will probably show something like
<FF><FE> ^@S^@t^@a^@r^@t^@ ^@o^@f^@ ^@R^@e^@p^@o^@r^@t^@ ^@^M^@

The file(1) program reports them as
Little-endian UTF-16 Unicode English character data, with CRLF line terminators
You can load them into emacs with the sequence C-x <RET> c utf-16-le-dos <RET> C-x C-f filename <RET>, which means to do the find-file command in this Windows friendly Unicode mode.
Hmm, there's probably a way to use this to save out a utf-16-le file, but I'm getting the error
write-region: Symbol's function definition is void: utf-16-le-pre-write-conversion

Never mind.
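If emacs is being awkward, the conversion is easy enough from Python (a sketch; the sample text is invented, and in real life you'd read the bytes from the Notepad file instead):

```python
# What Notepad writes: a byte-order mark, then UTF-16-LE code units
# with CRLF line endings.
raw = "\ufeffStart of Report\r\n".encode("utf-16-le")
print(raw[:2])               # b'\xff\xfe' -- the <FF><FE> that less shows

# Decoding with the plain "utf-16" codec honours the BOM (and drops it),
# picking little-endian automatically.
text = raw.decode("utf-16")
print(text.strip())          # Start of Report
```

(iconv -f UTF-16LE -t UTF-8 does the same job from the shell.)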
video=neofb:picturebook
patch for 2.6.16.5 [2006-04-16] - This is just a trivial update of somebody else's patch, which I found via this blog. It didn't apply cleanly to 2.6.16.5 so I did the merge and diffed again. See also the /etc/fb.modes info.