March 2009 Archives

It looks like I forgot to set dom0-min-mem to something reasonable in /etc/xen/xend-config.sxp. Yes, that was the problem:
sshd: page allocation failure. order:0, mode:0x20

Call Trace:
  [] __alloc_pages+0x299/0x2b2
 [] cache_alloc_refill+0x2b2/0x535
 [] __kmalloc+0xb0/0xf0
 [] pskb_expand_head+0x51/0x137
 [] :bridge:br_dev_queue_push_xmit+0x13e/0x1ad
 [] :bridge:br_nf_post_routing+0x17a/0x195
 [] nf_iterate+0x41/0x7d
 [] :bridge:br_dev_queue_push_xmit+0x0/0x1ad
 [] nf_hook_slow+0x58/0xbc
 [] :bridge:br_dev_queue_push_xmit+0x0/0x1ad
 [] :bridge:br_forward_finish+0x3f/0x51
 [] :bridge:br_nf_forward_finish+0xf7/0xff
 [] :bridge:br_nf_forward_ip+0x14d/0x15d
 [] nf_iterate+0x41/0x7d
 [] :bridge:br_forward_finish+0x0/0x51
 [] nf_hook_slow+0x58/0xbc
 [] :bridge:br_forward_finish+0x0/0x51
 [] :bridge:__br_forward+0x0/0x6d
 [] :bridge:__br_forward+0x59/0x6d
 [] netif_be_start_xmit+0x0/0x4a6
 [] :bridge:br_flood+0x7d/0xc6
 [] :bridge:br_handle_frame_finish+0xe3/0xf8
 [] :bridge:br_nf_pre_routing_finish+0x2ed/0x2fc
 [] :bridge:br_nf_pre_routing_finish+0x0/0x2fc
 [] nf_hook_slow+0x58/0xbc
 [] :bridge:br_nf_pre_routing_finish+0x0/0x2fc
 [] :bridge:br_nf_pre_routing+0x611/0x62f
 [] nf_iterate+0x41/0x7d
 [] :bridge:br_handle_frame_finish+0x0/0xf8
 [] nf_hook_slow+0x58/0xbc
 [] :bridge:br_handle_frame_finish+0x0/0xf8
 [] notify_remote_via_irq+0x2b/0x67
 [] :bridge:br_handle_frame+0x16e/0x1a2
 [] netif_receive_skb+0x1ca/0x2ea
 [] process_backlog+0xd0/0x182
 [] net_rx_action+0xe3/0x24b
 [] __do_softirq+0x83/0x117
 [] call_softirq+0x1c/0x28
 [] do_softirq+0x6a/0xed
 [] do_hypervisor_callback+0x1e/0x2c
  [] .text.lock.spinlock+0x0/0x8a
 [] key_lookup+0xf/0x68
 [] lookup_user_key+0x13c/0x1e4
 [] keyctl_revoke_key+0x14/0x37
 [] tracesys+0xab/0xb5

DMA per-cpu:
cpu 0 hot: high 0, batch 1 used:0
cpu 0 cold: high 0, batch 1 used:0
cpu 1 hot: high 0, batch 1 used:0
cpu 1 cold: high 0, batch 1 used:0
cpu 2 hot: high 0, batch 1 used:0
cpu 2 cold: high 0, batch 1 used:0
cpu 3 hot: high 0, batch 1 used:0
cpu 3 cold: high 0, batch 1 used:0
cpu 4 hot: high 0, batch 1 used:0
cpu 4 cold: high 0, batch 1 used:0
cpu 5 hot: high 0, batch 1 used:0
cpu 5 cold: high 0, batch 1 used:0
cpu 6 hot: high 0, batch 1 used:0
cpu 6 cold: high 0, batch 1 used:0
cpu 7 hot: high 0, batch 1 used:0
cpu 7 cold: high 0, batch 1 used:0
DMA32 per-cpu:
cpu 0 hot: high 186, batch 31 used:30
cpu 0 cold: high 62, batch 15 used:11
cpu 1 hot: high 186, batch 31 used:178
cpu 1 cold: high 62, batch 15 used:24
cpu 2 hot: high 186, batch 31 used:168
cpu 2 cold: high 62, batch 15 used:14
cpu 3 hot: high 186, batch 31 used:160
cpu 3 cold: high 62, batch 15 used:12
cpu 4 hot: high 186, batch 31 used:97
cpu 4 cold: high 62, batch 15 used:5
cpu 5 hot: high 186, batch 31 used:146
cpu 5 cold: high 62, batch 15 used:0
cpu 6 hot: high 186, batch 31 used:83
cpu 6 cold: high 62, batch 15 used:5
cpu 7 hot: high 186, batch 31 used:123
cpu 7 cold: high 62, batch 15 used:2
Normal per-cpu: empty
HighMem per-cpu: empty
Free pages:        5424kB (0kB HighMem)
Active:36633 inactive:756 dirty:18 writeback:3 unstable:0 free:1356 slab:28807 mapped:918 pagetables:1660
DMA free:4012kB min:8kB low:8kB high:12kB active:164kB inactive:0kB present:2120kB pages_scanned:8415 all_unreclaimable? yes
lowmem_reserve[]: 0 1002 1002 1002
DMA32 free:1412kB min:4044kB low:5052kB high:6064kB active:146368kB inactive:3024kB present:1026160kB pages_scanned:21160 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
Normal free:0kB min:0kB low:0kB high:0kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
HighMem free:0kB min:128kB low:128kB high:128kB active:0kB inactive:0kB present:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 1*8kB 0*16kB 1*32kB 0*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 4012kB
DMA32: 1*4kB 0*8kB 0*16kB 0*32kB 0*64kB 1*128kB 1*256kB 0*512kB 1*1024kB 0*2048kB 0*4096kB = 1412kB
Normal: empty
HighMem: empty
Swap cache: add 0, delete 0, find 0/0, race 0+0
Free swap  = 0kB
Total swap = 0kB
Free swap:            0kB
264192 pages of RAM
22597 reserved pages
5161 pages shared
0 pages swap cached

I rebooted the box (using Xen's CTRL-A three times escape sequence). Looks like the people on stables get a free month.
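For reference, the fix is one line in /etc/xen/xend-config.sxp; the value below is just an example, not necessarily what this box uses:

```
# /etc/xen/xend-config.sxp -- reserve a floor of memory for dom0 so the
# balloon driver can't shrink it to the point of page allocation failures.
# Value is in MB; pick something reasonable for your dom0 workload.
(dom0-min-mem 1024)
```

After changing it, xend needs a restart for the new minimum to take effect.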

the necessity of Kool-aid


I'm not the type to drink Kool-aid, so in running my business, I've not made any Kool-aid. I try to remain sober and keep in mind my advantages (and my competitors' advantages); when asked, I give an honest overview. This sometimes costs me deals, and sometimes it costs me employees, but overall I think maintaining that humility is a win.

Like most technical people, I have a 'there is nothing new under the sun' attitude: almost everything is an incremental improvement over something else (certainly almost all businesses are).

Now, a lot of the time that incremental improvement makes a big difference. It makes a lot of sense to focus on improving your incremental improvement as much as possible, and really, it makes sense to focus on marketing that incremental improvement, as that is the value you are providing over your competitors.

But many companies seem to want to put themselves forward as some kind of revolution... sometimes they ignore the good parts of how things have always been done and end up with a product that is better in some ways, but that fails in some of the basic ways that they would not have failed in had they stuck with 'what exists now plus our incremental improvement' rather than trying to re-invent wheels.

Other companies do the (I think rational) 'take what is common now, add some incremental improvements, sell' but then market it as if it were something completely new.

I give the example here of the so-called "cloud computing" providers. (You probably shouldn't call us a cloud computing provider, not until we slightly improve our provisioning system, at least. I've been looking at Eucalyptus as the way forward, as I think an API that is compatible with several providers is essential to providing a product that is actually useful to consumers.)

Cloud computing is [virtual] dedicated servers, with the incremental improvement of a nice provisioning system. Now, that's a real and very useful incremental improvement (and it does enable some fundamental changes in how data center space is thought about and managed), but you can take what you are doing on your current [virtual] dedicated servers and put it on a server at a 'cloud computing' provider with no other changes, assuming the cloud computing provider gives you static IP addresses (most do; frontend servers without a stable set of IPs are generally considered a Bad Idea).

(This is why I see EC2 as a competitor even though they are a 'cloud' and I am not. I realize they don't know I exist yet, but that's OK. I was here first, though nobody calls me a serious business guy; it's been over five years since I've worn a tie.)

Now, this incremental improvement of good provisioning is something that you have been able to implement on your own with your own hardware for a long time now. Look at tools like cobbler and koan (or really, if you need to clone your servers, systemimager). Functionally it's about the same thing, but you needed access to your own boot server infrastructure. The new part is that there is now an easy way to do this with a small number of servers, and the cloud computing providers maintain the provisioning system for you (rather than you maintaining your own boot system, using systemimager or whatever other tools you like).

But setting up a good provisioning system used to require a good bit of server and sysadmin infrastructure, and the cloud computing providers have removed most of that barrier. (You still need sysadmin resources at the application level to scale: taking a webapp from running on one server to running on two servers requires application-level thought. Depending on the application, it goes from trivial to very hard. But that is a problem that must be solved at the application level.)

Charity and advertising


Running a business, of course, I need advertising. Now, personally I think that charity and advertising can go hand in hand. Doing something good often gets you press in ways that are more valuable than the kind of press you can directly buy. A Linux user group saying that they host on my server is probably worth quite a lot of those pay-per-click search ads. It is cheaper for me, usually, and it supports causes I like. I like open-source software, and I recognise that it does need some support from commercial entities, and I also recognise that commercial entities like mine would not exist without open-source software. (For that matter, I wouldn't be able to do my dayjob without open-source software. Open-source is what allows me to be drastically more productive (and thus get paid more) than a windows reboot monkey.)

If you are running a computer or open-source related project that is generally not profit-seeking (you don't need to officially be a non-profit... I'm writing off the cost of hosting you as an advertising expense, rather than writing off the retail value of the package as a charitable donation. Smaller writeoff for me, but much easier to defend.) email me, and maybe we can reach an agreement. At the moment, I'm not in a position to hand out free images that consume more than 10Mbps at the 95th percentile, but I hope to change that soon.

So a customer was complaining that ssh sessions to his VPS were timing out. I've hit this problem several times myself, so I thought I'd write a bit about it.

The problem is usually a NAT with an aggressive TCP session timeout (large corporate NATs are notorious for this; the NAT must keep track of every TCP session, so it times out sessions after a certain period without a packet). If you have control over the NAT, you can usually set the timeout long enough that you don't have a problem. But many low-end NAT boxes don't allow you to adjust this setting, and in many corporate environments you don't have access to it.
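If the NAT in question happens to be a Linux box you administer, the established-session timeout is a conntrack sysctl. The key below is for a netfilter kernel using nf_conntrack (older kernels name it ip_conntrack instead), and the value shown is just an example:

```
# /etc/sysctl.conf -- how long an idle established TCP session survives
# in the NAT's connection table. 432000 seconds is 5 days.
# On older kernels the key is
# net.ipv4.netfilter.ip_conntrack_tcp_timeout_established instead.
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
```

Apply with `sysctl -p`; any timeout comfortably longer than your longest idle ssh session will do.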

Remember kids, lobby for IPv6, or else we will all be behind giant NAT boxes at our ISP that we don't control.

However, if you are, for whatever reason, trapped behind a NAT you do not control, you can use sshd's client alive mechanism to send packets when your session is idle. The mechanism was actually designed to do the opposite: to kill the connection when the TCP stream is interrupted (the idea is that if nobody sends any packets, a TCP session can stay 'open' for many days, even if one of the devices is disconnected from the network). The useful side effect is that sshd sends a packet over the encrypted connection every ClientAliveInterval seconds, which keeps the NAT's session entry from timing out. From the sshd_config man page:

             Sets a timeout interval in seconds after which if no data has been
             received from the client, sshd will send a message through the
             encrypted channel to request a response from the client.  The
             default is 0, indicating that these messages will not be sent to the
             client.  This option applies to protocol version 2 only.

Of course, if you are on a connection that drops a lot of packets (like my cellphone), this can present another problem: a few dropped packets kill your ssh connection. There is another setting, ClientAliveCountMax, that can be used to mitigate that to some extent:
             Sets the number of client alive messages (see below) which may be
             sent without sshd receiving any messages back from the client.  If
             this threshold is reached while client alive messages are being
             sent, sshd will disconnect the client, terminating the session.  It
             is important to note that the use of client alive messages is very
             different from TCPKeepAlive (below).  The client alive messages are
             sent through the encrypted channel and therefore will not be
             spoofable.  The TCP keepalive option enabled by TCPKeepAlive is spoofable.
             The client alive mechanism is valuable when the client or server
             depend on knowing when a connection has become inactive.

             The default value is 3.  If ClientAliveInterval (see below) is set
             to 15, and ClientAliveCountMax is left at the default, unresponsive
             ssh clients will be disconnected after approximately 45 seconds.

So personally, at the bottom of my /etc/ssh/sshd_config file, I have the following:
ClientAliveInterval 100
ClientAliveCountMax 1000

It works well enough.
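Note that the same mechanism exists on the client side, which is handy when you can't touch the server's sshd_config. In ~/.ssh/config the equivalent options are ServerAliveInterval and ServerAliveCountMax (values below are examples mirroring my server-side settings):

```
# ~/.ssh/config -- the client probes the server instead of the other
# way around; each probe resets the NAT's idle timer for the session.
Host *
    ServerAliveInterval 100
    ServerAliveCountMax 1000
```

With an interval of 100 and a count of 1000, the link has to stay dead for roughly 100,000 seconds (about 28 hours) of unanswered probes before the session is torn down, which is why it survives flaky connections.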
So yeah, remember I asked about svtix a while back? Well, it looks like I'm pulling the trigger on that one. I'm getting me a full rack, with 20A of 110V power. I really only need about half that right now, but for another $150 or so over the half rack I actually need, I can get a full rack.

I was going to just split this 50-50 with another guy, but it seems like he is backing out 'cause his dayjob wants him to focus. I can understand, but eh, when my dayjob asks me to choose between them and my business, well, the dayjob loses. There's a reason why I call it a dayjob; this is what I actually want to do.

Anyhow, my dayjob doesn't seem to be in any danger of doing that. Here's the plan: I'm going to buy the full rack, set it up real nice, and then rent out the space I don't need at near cost.

Now, I've chosen to go with a reseller for svtix. I'm going through egihosting, as their prices are good, and they give me bandwidth at cogent prices.

(Not that bandwidth is super-premium, or that Cogent is unacceptable, but Cogent doesn't do IPv6, and egihosting does. Yeah, people care: we have less than three years of the status quo left, and after that, IPv6 appears to be by far the most pleasant alternative. Not the only alternative, mind you, just the best one. Also, even though bandwidth is very rarely the least reliable part of your data center, if you use Cogent, there are certain people who will avoid you for that reason alone.)

Also, and this is perhaps the biggest reason I'm going with egihosting: they are up front. No 'call the salesman and we will fuck around for half an hour trying to figure out how much you can pay' bullshit before getting a price. They put their prices on their website, just like I do. I like that.

Now, I try as much as possible not to be affected by how a place (or a person, for that matter) looks. I rented my first data center sight unseen for this very reason. (It works out rather well when hiring employees, so I thought, why not?) That was in Sacramento at heraklesdata, a data center I'm almost out of. The problem was that when I showed up to put in my servers, I found that the two-post rack I got was less than two feet from a chain link fence, meaning you could only just squeeze by the next guy's stuff to get in. Ridiculous. So now I go and look.

I visited the svtix data center a few weeks ago, and it looked pretty cool. I showed up to find a squat industrial-looking building, with a bunch of generator sheds behind the parking lot and a yellow civil defense 'fallout shelter' sign. The inside of the data center also looks pretty good. I could bring customers here.

Pricing here is pretty good: $750/month gets me a full rack, 20A of 110V, and 10Mbps of bandwidth. It appears that additional bandwidth is around $7.50 per Mbps if I commit, and $15/Mbps for overages. Better than I've seen anywhere else (at least anywhere else within driving distance of my place). (Oddly, they told me in email that their overages were $15/Mbps; I didn't negotiate that or anything, so I guess they need to update the website.)

Extra power is $350 for another 20A hookup. Oh, and there is a $500 setup fee on top of that. I'd save a little money (and/or have a lot more power) getting a 208V 30A circuit, but I already have rebooters rated for 110V, and with 110V I can wait to pay for the second circuit until I actually need it.

So here is what I'm thinking: the cabinet is 42U. I need 1U for switches, 1U for network/abuse monitoring (snort), 1U for the serial console, 2U for routers, and 2U for the controller unit for my rebooter, leaving 35U that I could use myself or sell.

First, unless I get a bunch of guys hosting Atoms, there's no way I can get 35 servers on a single 20A circuit, so we've got to factor in at least two circuits, three if we are talking dual-CPU servers. With two circuits, my cost is $750 + $350 = $1100 a month, or $31.43/month per saleable U. Assuming I use 75% of each 110V, 20A circuit, each one of the 42U (the switches, routers, rebooters, and serial console also use power) can use 78W and change.
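As a sanity check, here is the arithmetic from this post in one place (the 35 saleable U comes from subtracting the 7U of overhead listed earlier from 42U):

```shell
# back-of-envelope rack math: $750 rack + $350 second circuit,
# two 110V/20A circuits loaded to 75%, spread across all 42U
awk 'BEGIN {
    cost = 750 + 350                     # monthly cost, dollars
    printf "cost/month: $%d\n", cost
    printf "per saleable U (35): $%.2f\n", cost / 35
    watts = 2 * 110 * 20 * 0.75          # usable watts on both circuits
    printf "power budget: %dW\n", watts
    printf "per U (42): %.1fW\n", watts / 42
}'
```

Which confirms the numbers above: $1100/month, $31.43 per saleable U, a 3300W budget, and about 78.6W per U.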

Here's my plan: offer a hosting deal. 1U, one socket, including remote power control and a remotely accessible serial console. $50/month is the number I'm thinking of, but I might have to blacklist certain extra-hungry CPUs at that price. (Might not... just, if everyone shows up with a server like the 1U Core 2 Quad Q6600s I used to use, which ate north of 100W, I could have a problem.) I'm thinking a $50 setup fee to cover the time of the person who has to drive down and set you up. I'm hoping that my rebooter and serial console mean that you don't need to go back. If I can, say, have setup day once a week, I could probably reduce that $50 fee somewhat.

The plan is to only offer this for a short while, just until I get the parts of the rack I don't need sold. After that I'll look it over again and see whether it makes sense to do another rack that way or not.

At that rate, I'd gross $1750 (35U at $50/month) and, after the $1100 rack and power bill, net about $650/month before support costs on the rack. (That's if I rented the whole thing out, which I won't; I need some of that space for my own servers.)

Now, support for a physical server is not much different from support for a virtual one, at least from a network perspective, so I think I have that covered. (Well, I need to spend some more quality time with snort, but I need to do that regardless.)