things not to type into google (part 2)


"man kill"

to add to my previous list of

"man cut"
"man head"


I think a theme is developing here.




what does java give you? (or jvm type technologies)

It gives you a debugging entry point at a level which is much deeper than inline print "$myvar" crap, but still above the pages of syscall verbiage underlying scripting-language implementations.

And you get all the associated JMX, heap and thread dumping tools, whatever the language: Java, Scala, JRuby, Jython, Groovy, etc.
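
For example, a quick sketch of the sort of thing you get for free against any running JVM, whatever language produced the bytecode (the PID is a placeholder, and all of these tools ship with a standard JDK):

PID=12345                                      # substitute the JVM's process id, e.g. from "jps -l"
jps -lvm                                       # list running JVMs and their arguments
jstack $PID > /tmp/threads.txt                 # thread dump
jmap -dump:format=b,file=/tmp/heap.hprof $PID  # heap dump, for jvisualvm / MAT later
jstat -gcutil $PID 1000                        # GC utilisation, sampled every second
jconsole                                       # attach a JMX console to a local or remote JVM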


What's the deal with hosting multiple SSL enabled websites?


So, yet more re-purposed content from my serverfault.com procrastinations. This one started out as how to stick a second SSL virtual host on apache, but ended up taking a brief look at the state of "Server Name Indication" technology, which allows name-based hosting with TLS, and at when wildcard SSL certs can be used...
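
For reference, a minimal sketch of what an SNI setup looks like on apache. This assumes apache 2.2.12+ built against an OpenSSL with TLS extension support; the hostnames, file locations and certificate paths are invented for illustration:

cat > /etc/httpd/conf.d/ssl-vhosts.conf <<'EOF'
NameVirtualHost *:443

<VirtualHost *:443>
    ServerName www.example.org
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/www.example.org.crt
    SSLCertificateKeyFile /etc/pki/tls/private/www.example.org.key
</VirtualHost>

<VirtualHost *:443>
    ServerName blog.example.org
    SSLEngine on
    SSLCertificateFile    /etc/pki/tls/certs/blog.example.org.crt
    SSLCertificateKeyFile /etc/pki/tls/private/blog.example.org.key
</VirtualHost>
EOF

Clients that don't speak SNI only ever see the first vhost's certificate, which is where the wildcard cert discussion comes in.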


Solution for Reverse Proxying onto URLs that are not amenable to being relocated to a subdirectory

I posted this question on serverfault.com, and it turned into "War & Peace", so I decided to duplicate it here to capture my ongoing battle with the problem.
I am also motivated to do this because stackexchange have implemented an annoying "community wiki" feature which declares posts to be "community owned" after a certain number of edits, and steals any reputation points you might earn after that point.

My work-around to the community wiki problem is to maintain the question and answer content for editing elsewhere, which keeps the number of individual "edits" that serverfault.com sees below the cut-off.

I am starting to believe the old adage that you don't know a system well, until you know something about it that really annoys you. ;-)
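
For reference (and glossing over the gory details in the serverfault thread), the usual starting point looks something like the sketch below; the hostnames and paths are invented, and the whole problem is that some back-end apps emit absolute URLs and cookies that plain ProxyPassReverse won't rewrite:

cat > /etc/httpd/conf.d/reverse-proxy.conf <<'EOF'
# mod_proxy / mod_proxy_http assumed to be loaded
ProxyRequests Off
ProxyPass        /app/ http://backend.internal:8080/
ProxyPassReverse /app/ http://backend.internal:8080/
ProxyPassReverseCookiePath / /app/
EOF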

wtf is elasticsearch?


elasticsearch is the back-end used in the centralized logging getting started tutorial on the logstash.net site.

So, from the front page blurb: "It is an Open Source (Apache 2), Distributed, RESTful, Search Engine built on top of Apache Lucene."

Basically you chuck JSON data into elasticsearch, and use lucene queries, or some JSON DSL, to request data back. It's all RESTful, so you can look at the stuff in a browser:
http://localhost:9200

or you can use wget or curl, as you prefer...
curl -XGET http://localhost:9200/twitter/tweet/2
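
Indexing works the same way; a quick sketch in the style of the elasticsearch docs of the time (the twitter/tweet index, type and document are just examples):

curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
    "user"    : "kimchy",
    "message" : "trying out Elastic Search"
}'

curl -XGET 'http://localhost:9200/twitter/tweet/1'
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy'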
 
 
I was a little perplexed by all the multicast fuckery that I was getting when trying to use the non-embedded version of elasticsearch shipped with logstash, but now that I have read the docs, I can see why the clustering makes sense.

# ElasticSearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).
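
If you want to tame the multicast behaviour, this is a sketch of the sort of thing that goes in config/elasticsearch.yml (settings as they appeared in the 0.x-era docs; the cluster name and addresses are examples):

cat >> elasticsearch/config/elasticsearch.yml <<'EOF'
# pin the cluster name so random nodes on the LAN don't join by accident
cluster.name: logstash-demo
# disable multicast discovery and list the nodes explicitly instead
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
EOF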


Persistence and data
elasticsearch uses the notion of a gateway.
http://www.elasticsearch.org/guide/reference/setup/dir-layout.html
By default elasticsearch uses the "local" gateway and persists its data and indexes under the elasticsearch/data/ dir of the unpacked logstash setup.
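
A couple of quick sanity checks (paths assume the layout described above):

ls elasticsearch/data/                                      # the on-disk indexes live here by default
curl 'http://localhost:9200/_cluster/health?pretty=true'    # recovery / cluster state after a restart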


elasticsearch uses 9200+ for HTTP and 9300+ for the node-to-node transport.




rabbitmq-server manager
http://opencirrus-g0803.hpl.hp.com:55672/#/




overriding the JAVA_OPTS
http://www.elasticsearch.org/guide/reference/setup/installation.html
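
A sketch of the usual knobs, per the installation docs of the time (the values are examples; the heap settings get folded into JAVA_OPTS by bin/elasticsearch.in.sh):

export ES_MIN_MEM=256m
export ES_MAX_MEM=1g
export JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true"
elasticsearch/bin/elasticsearch -f     # -f keeps it in the foreground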



elasticsearch front ends
https://github.com/mobz/elasticsearch-head
This is super cool, whether it is any use is another question. The install was super easy:
elasticsearch/bin/plugin -install mobz/elasticsearch-head
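
Once installed, site plugins get served up by elasticsearch itself, so head should appear at something like:
http://localhost:9200/_plugin/head/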

https://github.com/lukas-vlcek/bigdesk


java service wrapper
http://wrapper.tanukisoftware.com/doc/english/download.jsp


service wrapper on github
https://github.com/elasticsearch/elasticsearch-servicewrapper


elasticsearch rpm spec files
https://github.com/tavisto/elasticsearch-rpms

elastic search chef cookbook
http://community.opscode.com/cookbooks/elasticsearch






centralized logging with logstash

Last week I migrated some services off servers in a rack that was being decommissioned.  There was a distinct lack of system documentation, either up to date or otherwise, so I thought configuring nagios, monit, and munin would be a good start to checking the performance and reliability of the service on the new instance.

It had been suggested that the development team have a log monitoring system based around the log4j library; however, it turns out that this system is mostly reactive, and there were a few undocumented configurations that didn't make the migration and caused some problems. After some hasty "diff -r" and rsync, everything seemed to be well.

But I decided that this weekend I am putting together a central log management system using logstash and chef, one that I can deploy zero-config style to hosts via a cookbook recipe, to catch 404 and 503 errors and other alerts coming from remote systems in a timely manner.
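
As a starting point, a rough sketch of the shipper side (the jar name/version is an example, and the filter/output options are from memory of the 1.1.x-era docs, so treat it as a sketch rather than gospel). Tagging the 404/503 lines for alerting can be layered on with a grep filter once the basics work:

cat > shipper.conf <<'EOF'
input {
  file {
    type => "apache-access"
    path => "/var/log/httpd/access_log"
  }
}
filter {
  grok {
    type    => "apache-access"
    pattern => "%{COMBINEDAPACHELOG}"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}
EOF

java -jar logstash-1.1.1-monolithic.jar agent -f shipper.conf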