Dead Inkjet

While I’m on a roll, I thought I’d complete the dead hardware trifecta and whine about my dead inkjet printer. My Epson 1520 died in the stupidest possible way. The printer mechanism works fine, but the safety interlock is broken. The interlock is designed to shut down the printer when you open the lid, so you don’t injure your fingers by sticking them into the mechanism while it’s printing. Every time I turn on the printer, it complains the lid is open and refuses to print.

I suppose I could disassemble the printer and bypass the interlock, but I think I’ll just toss it in the trash. The 1520 is a wonderful printer; it’s the last 11×17 CMYK printer Epson made, which made it perfect for prepress proofs. The new generation of 6- and 8-color printers is way too good for prepress proofs; their output is too saturated and bright to pass for realistic CMYK.

But there is no sense in beating a dead horse. Epson no longer makes drivers for this printer; you have to use CUPS, which comes free with Mac OS X, but it isn’t very color-accurate. Time to send the old beast to the graveyard.

Fortunately my ancient HP LaserJet 5MP is still going strong. I don’t even remember when I bought that printer; I think it dates back to the mid-1990s.

Dead CRT

My beautiful Sony sf300 20-inch CRT just died. This isn’t too surprising; the CRT had been acting cranky, but I was hoping it would last a few more months. This was the worst possible time for a sudden breakdown. I’m right in the middle of a project, so I had to run out and buy a replacement monitor. Saturday night at 8 PM is not a good time to shop for a new monitor. The only place open was Best Buy, and I’ve never ever been satisfied with any product I bought at Best Buy. But I really didn’t have much choice, so I bought a cheap 17-inch Samsung LCD.

I hate LCDs; I prefer a CRT for critical color work. Back when I bought the sf300, it was extremely expensive, a top-end monitor designed for color-calibrated work. Its color was always very accurate, right up to the moment it died. I think I bought the sf300 around 1993, so I suppose it had a good, long life.

The Samsung LCDs are supposedly the same panels used in Apple Cinema Displays, but I’ve seen the big 30-inch Cinema Display, and its text is a hell of a lot clearer than this Samsung’s. You get what you pay for. And that is what is most disappointing: I had to spend money I was reserving for a new system. I’ve been thinking of buying a new PowerMac Quad G5 and a 30-inch Cinema Display, but I wanted to wait a couple more months. I had a great scheme: register as an Apple Developer for $500, then buy a Quad G5 and a 30-inch display at a huge discount. I recall pricing out systems with discounts as high as $1800. So an Apple Developer registration really pays off if you plan on buying a high-end system: you spend $500 and get back $1800.

But this was an emergency, and I wanted to get back up and running fast. I was prepared to buy just the 30-inch Cinema Display, even without the Developer discount, so I called the nearest Apple Store. If they had a video card capable of running the Cinema Display on my old MDD dual-1GHz G4, I would have bought it and picked it up in the morning. But there is only one video card that can do the job, and they didn’t have it. That Radeon 9800 Pro card costs $250, almost as much as this cheap Samsung LCD. The fastest I could get a 9800 card was Tuesday, by mail order. Oh well, so much for that idea.

I was hoping I’d get through the next few weeks and then splurge on a new system; I figured I should buy one last PowerPC system to get me past the Mac Intel transition. I was hoping to move out of Iowa and buy the system once I got to a new residence, to avoid having to move more hardware, but now I’m not sure what to do. I don’t really want to buy a $250 video card for an old machine when that’s almost 10% of the price of a new machine. So Monday, I guess I’ll call up Apple and become an official developer, get a new system, and then return this piece of crap LCD to Best Buy.

Computers Never Make Mistakes

Back in the early 1970s, when I was just starting to learn computer programming, using a computer was a much different process than it is today. First you’d write down your program on paper, then you’d type it onto punched cards, then you’d hand the deck of cards to the computer operator. A few hours later, you could pick up a printout of your program’s results. Quite often your program wouldn’t work, and all you’d get was a couple of pages of incomprehensible error messages. To learn what the error messages meant, you’d go to the desk at the back of the keypunch room, where a 60-foot-long rack contained all the documentation for the IBM 360 computer series. Imagine a room with an entire 60-foot wall lined with tables, and on top, 60 feet of documentation in racks placed side by side.

Trying to find useful, relevant data in this rack was like trying to find a single page in a book 60 feet thick. In fact, that’s exactly what it was. Even worse, there were dozens of indexes spread throughout the 60 linear feet of documents: one index could send you to another index, which referred to specific pages, which might then refer you to updates or errata inserted erratically throughout the rack. When the room was busy, there would often be several people reading different sections of the rack, taking notes, then moving to another section and taking more notes. Some sections of the rack were more useful than others, and it was common to see people standing in line, waiting to use a particular section.

Only a few people knew the entirety of the documentation: the few Comp Sci grad students who maintained the racks by inserting the monthly updates and errata. It must have been extremely tedious to insert updated pages throughout the 60-foot rack, but in the process, they learned where all the useful information was.

These same grad students also worked in the “debug room,” a small office where you could ask for help interpreting your program errors. People would line up in the hall outside the office, waiting to seek advice from “the debugger.” The debugger kept a short rack on his desk containing a master index to the big 60-foot documentation rack. He would look at your program printout, and if the problem was not obvious, he’d look through the index and refer you back to a specific document in the big rack. Then you’d go read some more documentation, figure out what went wrong, punch a few cards to correct your program, search through your card deck to swap in the corrected cards, and resubmit the job. And the cycle would start all over again.

The one thing I remember most vividly about the debug room was a big sign hanging on the wall, the first thing you’d see upon entering the room. The sign was drawn on a computer pen plotter, in an oddly machine-like character set. It said:


Computers never make mistakes. All “computer errors” are human errors.

Even today, this is the hardest thing for computer users to understand. If a computer does not give you the results you expected, it is because you gave it bad instructions. Computers follow your instructions faithfully, and will accurately produce the incorrect answer that you incorrectly specified. In those olden days, computers were not so fault-tolerant; if your program had errors, it would stop and produce nothing but an error message. But modern computer programs anticipate that their users might be idiots, and are designed to gracefully handle even the most stupid, nonsensical requests. I suspect this is a very bad thing. It allows people to get results even when their requests are imprecise. I think it would be better to be strict, returning no results in response to vague inputs.

At the risk of offending a dear friend, I will use him as a case in point. I have a friend who often asks me for technical support, but his phone calls sometimes take hours, primarily due to his vague descriptions of his problems. He’ll phone me up and say things like “I’m trying to print, but I press the whatchamacallit and nothing happens.” No, I’m not using the word “whatchamacallit” as a euphemism; he really does say “whatchamacallit.” When I object to his vague descriptions, he says I’m supposed to anticipate what he is doing because I know the programs so well. This is precisely NOT how to get good help. If I don’t know precisely what you’re doing wrong, how can I tell you how to do it right?

To use a computer and get good results, you must operate it with precision. But first, you must think with precision. This is no different than any other complex task in life. Human beings are not used to thinking with precision. This is why it is easier to fix computers than to assist users in operating them. Computers always give you a precise report on what they are doing. Users often don’t know what they are doing.

After decades of providing tech support to thousands of computer users, I made an observation that I have formulated as a new law. I call it “The Law of Infinite Stupidity.” It states:


There are a finite number of ways to do something right. But there are an infinite number of ways to do something wrong.

A Response to Kevin Marks’ Anti-DRM Argument

Kevin Marks recently posted an argument against Digital Rights Management on his weblog, and apparently has submitted it to a working group of the British Parliament. When I read his argument, I was astounded. The entire argument is founded on an error, a miscomprehension of one of the fundamental tenets of Computer Science.

I would summarize Marks’ statement as two basic arguments:

1. DRM is futile; it can always be broken.

2. DRM is a perversion of justice.

Marks opens his argument with a huge misstatement of facts:

Firstly, the Church-Turing thesis, one of the basic tenets of Computer Science, which states that any general purpose computing device can solve the same problems as any other. The practical consequences of this are key – it means that a computer can emulate any other computer, so a program has no way of knowing what it is really running on. This is not theory, but something we all use every day, whether it is Java virtual machines, or Pentiums emulating older processors for software compatibility.

How does this apply to DRM? It means that any protection can be removed. For a concrete example, consider MAME – the Multi Arcade Machine Emulator – which will run almost any video game from the last 30 years. It’s hard to imagine a more complete DRM solution than custom hardware with a coin slot on the front, yet in MAME you just have to press the 5 key to tell it you have paid.

Unfortunately, Marks has completely misstated the Church-Turing Thesis. It is a common misconception that the Church-Turing Thesis states that any computer can be emulated by any other computer. This fallacy has come to be known as “The Turing Myth.” This is a rather abstract matter; there is a short mathematical paper (PDF file) that fully debunks the misstatement Marks uses as the fundamental basis of his argument.

To cut to the core of The Turing Myth: there is a widespread misunderstanding that the Turing Thesis means any sufficiently powerful computer can emulate any other computer. The Turing Thesis is much narrower; in brief, it states that any computable algorithm can be executed by a Turing Machine. This in no way implies that any computer can emulate any other computer. Perhaps Turing inadvertently started this misunderstanding with a bad choice of nomenclature; he labeled his hypothetical computer a “Universal Machine,” which we now call a “Turing Machine.” However, a Turing Machine is universal only with respect to a limited spectrum of computing functions.

One joker restated the Turing Thesis as “a computer is defined as a device that can run computer programs.” This may seem obvious now, but in Turing’s day, computers were in their infancy, and the applications (and limitations) of computers were not obvious. As one example of these limits, there is a broad category of incomputable problems that cannot be solved by any computer, let alone a Turing Machine. For example, a computer cannot algorithmically produce a true random number; it can only calculate pseudo-random numbers. This limit of computability has founded the whole field of quantum cryptography: encoding methods based on incomputable physical processes, such as the random decay of atomic particles. Quantum cryptographic DRM would be unbreakable, no matter how much computer power was applied to breaking it.
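To make the pseudo-randomness point concrete, here is a minimal sketch in Python of a linear congruential generator, the classic textbook “random number” algorithm. The constants are my own illustrative choice; the point is that the whole procedure is deterministic, so the same seed always produces the same “random” sequence:

```python
# A linear congruential generator (LCG), the classic textbook PRNG.
# Any fixed constants make the same point: the output is fully
# determined by the seed, so it is pseudo-random, never truly random.

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield an endless stream of pseudo-random integers."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state

gen1 = lcg(seed=42)
gen2 = lcg(seed=42)
print([next(gen1) for _ in range(5)])
print([next(gen2) for _ in range(5)])  # identical output: determinism, not chance
```

Run it and the two lines match exactly. An algorithm can only transform its inputs; it has no source of true randomness unless you feed it one from outside, which is precisely why unbreakable randomness must come from physical processes.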

I contacted Marks to inform him of the Turing Myth, in the hope that he might amend his argument, since it all springs from a fallacy. He responded briefly by emphasizing the MAME example, and cited Moore’s Law. Apparently Marks is arguing that since computers are always increasing in power, any modern computer can break older DRM systems based on simpler computers. He also appears to argue that an emulator can simulate the output device, converting the protected stream on the fly to an unencrypted format for recording.

Unfortunately, Marks chose a terrible example. The original game systems emulated by MAME had no DRM whatsoever. It was inconceivable to the game manufacturers that anyone would go to the trouble and expense of reverse-engineering their devices. The code inside these game systems was designed to run on a specific hardware set; any identical hardware set (or emulated hardware set) could run the unprotected code. At best, these devices used “security by obscurity,” which any computer scientist will tell you is no security whatsoever.

Ultimately, DRM systems must not be so cumbersome as to be a nuisance to the intended user. This has led to a variety of weaker DRM systems that were easily broken, for example the CSS encryption in DVDs. However, this is no proof that truly unbreakable DRM is impossible or unworkable. As computer power and mathematical research advance, truly unbreakable DRM will become widespread.

Having dispensed with Marks’ first premise, let us move on to the second: that DRM is a “perversion of justice.” I cannot speak to British law as Marks does; however, it seems to me that his arguments invoke the aura of British heroes like Turing and Queen Anne to pander to unsophisticated British Parliamentarians. While his remarks are addressed to Parliament, he has attempted to argue from “mathematical truth” that DRM is futile. I would have expected his legal argument to base itself on more universal international copyright agreements, such as the Berne Convention. But I will not quibble over the scope of the argument, and will instead deal with the argument itself. Marks states:


The second principle is the core one of jurisprudence – that due process is a requirement before punishment. I know the Prime Minister has defended devolving summary justice to police constables, but the DRM proponents want to devolve it to computers. The fine details of copyright law have been debated and redefined for centuries, yet the DRM advocates assert that the same computers you wouldn’t trust to check your grammar can somehow substitute for the entire legal system in determining and enforcing copyright law.

It appears that Marks’ fundamental complaint with DRM is that it puts restrictions in place that prevent infringement before it occurs. Current copyright laws only allow valid copyright-holders to sue for damages after infringement occurs. Marks asserts this prior restraint is a violation of due process. However, he is mistaken: the DRM end-user has already waived his rights. When a user purchases a product with DRM, he is entering into a private contract with the seller, and he explicitly accepts these restraints. If the user does not wish to subject himself to these restrictions, he merely needs to reject the product: don’t purchase it, and don’t enter into that contract with the seller.

I can find no legal basis that would prohibit the use of prior restraint in private contracts. It would seem to me that this is a common occurrence. For example, I might sign a Nondisclosure Agreement when dealing with a private company, agreeing that I will not disclose their secrets. A company might even distribute encrypted private documents to NDA signatories.

Ultimately, Marks’ arguments do not hold up to scrutiny. They are based on false premises, and thus cannot lead to valid conclusions. Let me close by answering the same questions posed by Parliament that Marks answered:

Whether DRM distorts traditional tradeoffs in copyright law. I submit that it does not. It merely changes the timing of the protection afforded by copyright law: it prevents infringement before it occurs, rather than forcing the copyright-holder to pursue legal remedies afterward.

Whether new types of content-sharing license (such as Creative Commons or Copyleft) need legislation changes to be effective. Current copyright laws are effective in protecting individual artists as well as corporate interests. Amendments to private distribution contracts such as CC or Copyleft are unproven in court. There is no compelling reason to change current copyright laws.

How copyright deposit libraries should deal with DRM issues. Since all DRM-encumbered materials originated as unprotected source material, it is up to the owners to archive this material as they see fit. Certainly the creators and owners have no reason to lock up all existing versions of their source material, as this would impede any future repurposing of their content. Since a public archive of copyrighted material has no impact on the continued existence of the original source material, it is up to the libraries to establish their own methods for preserving DRM playback systems.

How consumers should be protected when DRM systems are discontinued. How were consumers protected when non-DRM systems were discontinued? They were not. I cannot play back Edison cylinder recordings on modern equipment, yet I could continue to play them on original Edison Phonographs. Vendors cannot be required to ensure their formats continue forever; this would stifle innovation.

To what extent DRM systems should be forced to make exceptions for the partially sighted and people with other disabilities. Disabilities are as varied as the multitude of people who have them; no DRM system could possibly accommodate all disabled persons. Some accommodations make no sense: for example, an exhibit of paintings or photography will always be inaccessible to the blind. “Accessibility” is a slippery slope; there will always be someone who complains they need further exceptions. Forcing owners to provide exceptions for disabilities will only lead to increasingly costly demands for accommodations upon content providers, stifling their ability to provide products for mass audiences.

What legal protections DRM systems should have from those who wish to circumvent them. DRM systems should be afforded the protections available under whatever private contracts license the work, just as the law exists today. End-users who are entitled to Fair Use already have the ability to request source material from the owners.

Whether DRM systems can have unintended consequences on computer functionality. This is a design issue, not a legal or political issue. Nobody can doubt that any computer program can have unintended consequences.

The role of the UK Parliament… I abstain. Parliament is not my bailiwick.


In summary, I believe that Marks’ argument is based on two fallacies, and that his conclusions rest on a political wish, not a legal or technical argument. DRM is a compromise; some people (even I) may consider it a poor compromise, but I cannot see any technical or legal reason to burden content providers with even more ill-conceived compromises.

A Really Great Bad Day

A few days ago, I got a frantic email message from a programmer and blogger I know. She mass-mailed her whole circle of friends with a desperate message, something like “The crossbeam’s gone out of skew on the treadle, and it will take several days of work to get the server back working again! I won’t be able to answer email until I get this solved! I am having a REALLY BAD DAY!”

You have got to be kidding me, a bad day? Surely this is the sort of technical challenge that some people live for. It is all a matter of perspective. This is when you get to show your true mettle, solving a technical problem that few other people could understand, let alone solve. It should be a great day!

QWest Sucks More Than Ever

QWest, as the local telecommunications monopoly, has given me so much grief over such a long time that it is hard for me to remember a time when QWest didn’t suck. But today, QWest actually delivered something I spent months begging them to deliver, but was refused. And it clearly shows that QWest sucks more than they ever sucked before.

If you have followed my blog in recent months, you know that I moved my office, and when I arrived at the new location, I discovered that QWest DSL was not available. It never even occurred to me to check availability before moving, since DSL is deployed throughout this entire city… except in THIS neighborhood.

I did a lot of research, pushing my complaints up through the middle levels of QWest management, and was consistently told that DSL would not be available; they COULD provide it, but would NOT provide it. Reports from QWest’s DSL technicians indicated there was a new DSLAM installed a mere 8 blocks from my home, but QWest would not connect any users to it. The service was available and DSL techs were ready to install it, but management would not permit anyone to purchase the service and connect to the new DSLAM. The last manager I spoke to at QWest was almost psychotically rude; she took special pains to be as abusive as possible, despite my attempts to be as polite as possible (after all, I was begging them for service).

So this morning at 9 AM, as I was just getting up and making coffee, a bit fuzzy after a late night of work, I get a phone call. Oh joy, it’s a QWest telemarketer asking if they can look at my account and see if they can find any way to “serve me better.” I bite my tongue to suppress the urge to blurt out the whole DSL backstory, and tell the guy, “Look, I never use this land line, I use my cell phone for all my calling, and I’m thinking of disconnecting it entirely. The only thing you could do for me is hook me up with DSL.” The guy says he’ll check availability; I tell him not to bother, I just spent 4 months trying to get DSL and QWest always told me no. He looks it up in his computer, and surprise surprise, DSL is available at my location! I refused to believe it, so I went to my computer and looked it up on their website: yes, it is available!

Now, instead of being happy I can get DSL again, I am absolutely infuriated. This proves that QWest could have hooked me up 4 months ago, but they refused for no reason whatsoever. I’m probably going to move out of here within 2 or 3 months, so I will only be using this service for a short time, when I could have been using it all along.

Now that I am scheduled for DSL installation on December 5, I will be able to restore BlogTV service. I needed DSL with high upstream bandwidth and 2 static IPs in order to deliver video from my QuickTime Streaming Server; this was impossible with my current cable modem connection. But it is not worth restoring service if I’m just going to move in a couple of months and go through this all over again. It is time for me to move this blog and the QTSS server to a professional hosting service.

If there is one thing I learned from years of doing customer service, it’s that the worst thing you can ever do is screw your customers in a way that makes them look bad in front of THEIR customers. And that is exactly what QWest has done to me. They refused to deliver DSL when they could have, causing my archive of video stories to stop working. Years of my work were taken down, making me look bad. I recently noticed a couple of articles about video blogging cited my website, even after the video server went down. But when readers clicked on the links to my videos, they got nothing but an error message. I looked like an idiot, and the article writers looked like idiots too, for citing a dead link. This is the sort of thing that makes me totally dispirited about publishing ANYTHING.

But no more. I am now determined to be totally free from QWest and their incompetence. I will be using QWest for my home connectivity, but only for the short term. I am determined to move this blog to a bulletproof hosting service NOT through QWest. And I am determined to resume writing and posting videos as often as possible once the transition is complete. I have several long videos recorded and ready to post, as soon as I can get the video server up and online. But it’s going to take a bit longer, if I’m going to do this right. So bear with me, this transition may be a little rough, but this blog will be better than ever. I promise.

Thermodynamics

Sometimes you can change a bit and suddenly everything is different. I literally changed one binary bit: I poked one button on my computer, and years of misery ended instantly.

If you are one of the few people who have ever been in my office, you probably remember one thing in particular: it is unbearably HOT. Computers kick out a lot of waste heat, and my PowerMac is especially hot. This particular model is known by the nickname “wind tunnel”; it is notorious for the noise of the high-powered fans it uses to vent all that heat. And all the heat goes out into my tiny office.

I recently moved into a new apartment, and relocated all my computer equipment into my new office in the second bedroom. When I put the utilities account in my name, the company said last year’s August bill was only $50, but when I got my first bill, it was $150! Either the previous tenant was exceptionally frugal and never ran the air conditioning, or else my computers were using a lot more power than I ever suspected.

I did a bit of research and discussed the problem with a few people, and the general opinion was that the computers didn’t really consume that much electricity; the big energy cost was the extra air conditioning needed to remove the excess heat the computers generate.

In the course of this discussion, someone suggested I look at an old software hack for my machine, called CHUD. It is an old Apple developer utility that adds “processor nap mode,” which idles the processor during periods of low CPU demand, reducing power consumption and waste heat output. I installed it, and miraculously the chip temperature dropped by nearly 30 degrees Centigrade, and the exhaust heat dropped to tolerable levels. I just poked the button and suddenly my office was cool again! The air conditioning stopped running all the time; I haven’t received my latest utility bill yet, but I expect it to be considerably lower.

I rarely reboot my computer, so about a week later, when I installed some new software and restarted, I didn’t think anything of it. But about an hour later, I felt like I had a fever; I was burning up. At first I thought I had caught a cold or flu, but then I checked the computer’s temperature sensors and discovered it was running hot again. Nap Mode isn’t persistent across reboots; you have to poke the button after every reboot. That’s not such a big deal, since I usually run for weeks and even months without rebooting.
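For anyone who wants to avoid the fever: as I recall, the CHUD tools also install a command-line utility called hwprefs that toggles the same setting as the checkbox. Assuming your CHUD version includes it (verify before trusting this sketch), you could re-enable Nap Mode automatically at every boot with a login hook, something like:

```sh
#!/bin/sh
# Hedged sketch, from memory, not a tested recipe: re-enable
# processor nap at login. Assumes the CHUD tools installed the
# hwprefs utility; verify with `which hwprefs` and `hwprefs cpu_nap`
# before relying on this, and adjust the path to match your install.
/usr/bin/hwprefs cpu_nap=true
```

Save it somewhere like /Library/Scripts/nap.sh, make it executable, and register it with `sudo defaults write com.apple.loginwindow LoginHook /Library/Scripts/nap.sh`, the standard Mac OS X login-hook mechanism of this era.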

But alas, this story has a sudden surprise ending, unexpected even by me. As I was writing this story, in the background I was installing the latest update to Mac OS X, version 10.4.3. Unfortunately, Nap Mode is now disabled, and my office is getting hot again. I am trying to get CHUD to work again, but it appears to be impossible. Oh well, it was nice while it lasted.

Update Nov 1, 2005: I got it working, and my office is cool again. A few years ago, I wrote a new variant of Murphy’s Law that I call “The Idiot’s Law,” and this is a perfect example. I asked for help with CHUD from the Accelerate Your Mac website. They published my plea for help, and then Nap Mode spontaneously started working again. The Idiot’s Law: Whenever you ask for tech help in a public forum, your problem suddenly resolves itself in a way that makes you look like an idiot.

Movable Type 3.2 Upgrade Completed

I just upgraded the server to Movable Type 3.2. It wasn’t quite as painless as some upgraders have claimed. I had one little glitch, but that’s mostly because my configuration is rather old and crufty; it goes back to the early days of MT, I think even before version 2. Oh well, everything should be running OK now (with the exception of the video server, which is still down).

I am considering moving this blog to an ISP-hosted server, so I can use their video server. I am just waiting for a response from their tech support, to see if I can point my current domain at their server. I think it might be time for me to buy a real domain name for this blog, but I don’t want to break any old links, so I’m trying to set up a clever way to forward all the old links to whatever new domain I register. I know how to do it; I just don’t know if the ISP will allow me to do it.
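For the record, the forwarding trick I have in mind is the standard one: a permanent (301) redirect on the old host that maps every old URL onto the same path at the new domain. Assuming the old server runs Apache with mod_rewrite and allows .htaccess overrides (and using “newdomain.example” as a placeholder for whatever I register), it would look something like this:

```apache
# Sketch of a catch-all permanent redirect in .htaccess.
# Assumes Apache with mod_rewrite enabled and .htaccess overrides
# allowed; "newdomain.example" is a placeholder domain.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^newdomain\.example$ [NC]
RewriteRule ^(.*)$ http://newdomain.example/$1 [R=301,L]
```

The R=301 status tells browsers and search engines the move is permanent, so old bookmarks and inbound links keep resolving. Whether the ISP permits .htaccess overrides is exactly the open question.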

Oh Crap, No DSL!

I just rented a new apartment, and while arranging the utility hookups, I discovered that it’s not certified for DSL availability! I wouldn’t have rented this apartment if I had known I couldn’t get DSL; I guess I should have checked in advance. But it never occurred to me that such a common service would be unavailable in the middle of a metropolitan area.

QWest says the apartment might be OK for DSL, but they won’t know for sure until they send a lineman to hook up the phones and check the lines. So I might be stuck with a cable modem, which could make it impossible to run this server properly. I might have to migrate the server contents to a professional hosting service, which would not be cheap since there aren’t many affordable hosts for QuickTime Streaming Server, one of the key features of this site.

I expect the server to go offline temporarily within the next week, while I move the CPU to my new apartment. There is a possibility that this server may be offline longer than expected, or resume service with some high-bandwidth features disabled. Stay tuned for more developments.

Update August 2, 2005: QWest officially says DSL is not available at my location because I am too far from their switching facility, so they did not even bother to send a lineman to test my lines. However, there are multiple reports from QWest customers on the same block as my apartment who DO have DSL. There is even one report that QWest installed a new DSLAM only 8 blocks from my apartment, so I am definitely not too far away to get service. Everyone says QWest DSL is available, except QWest. I’m still trying to get QWest to recognize that they built new facilities specifically to expand service in my area, but they just don’t believe me. The problem is that the QWest offices are in Seattle and Denver; they know nothing about the local network here in Iowa.

Update August 3, 2005: I called QWest DSL tech support under my old account, to see if they could do anything for me. The tech said they can look up my new location in their “circuit database,” and it shows my apartment is qualified for DSL, with an available “pair” (copper wires) ready for the service, IF I can get a QWest lineman to go out, check the line quality, and give approval. And they’re sure it would work, IF we can just get the “line conditioning” work done. But he also says that the QWest sales database will NOT show the location as OK for DSL, so they won’t even send out a lineman to check it, and on top of that, QWest Sales says they won’t do line conditioning anymore. I am at an impasse. QWest CAN sell me DSL, but they WON’T. This is ridiculous.

iPod Requires Native USB2

I solved a minor oddity with my iPod Mini. Every time I put my Mac to sleep with the iPod attached via USB2, when I wake the machine, I get an error message that one of the drives (obviously the iPod) was not disconnected properly. This actually corrupted the iPod’s disk once, but that’s not such a big deal: just reformat, reload, and it’s back in operation.

I finally figured out that iPods require a native USB2 port. My PowerMac MDD dual-1GHz only has USB1, so I added an Adaptec USB2 card. Unfortunately, that isn’t good enough: you must have a USB2 port that is built into the machine; an aftermarket USB2 card won’t work. The iPod Mini only comes with a USB2 cable, so I bought the inexpensive iPod FireWire cable: not the expensive dual FireWire/USB2 cable, just the plain old FireWire cable.

Now everything works fine. When my Mac goes to sleep, the iPod automatically disconnects, and it reconnects when the machine wakes from sleep. I must have read this somewhere, though I don’t recall where; it’s pretty obscure, so I figured I’d post it so anyone Googling for this info can find it easily.

© Copyright 2016 Charles Eicher