A common problem when starting a new project is getting fixtures in place to facilitate testing of reporting functionality and refining data models. To ease this, I’ve created a PDI (Pentaho Data Integration) job that creates the dimension tables and populates a fact table.
Here is an alias that I’ve used often to view packet payloads using tcpdump. It filters out all the overhead packets (SYNs, ACKs and other segments carrying no data), so the output contains only packets with payloads.
I usually stick the following lines into my .bashrc on all the servers I install.
alias tcpdump_http="tcpdump 'tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -A -s0"
alias tcpdump_http_inbound="tcpdump 'tcp dst port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -A -s0"
alias tcpdump_http_outbound="tcpdump 'tcp src port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -A -s0"
You can pass the interface you want to listen on as an extra argument, for example ‘-i eth0:1’ (without it, tcpdump defaults to eth0). The -A flag prints each payload as ASCII, so it’s easy to follow what’s going on.
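The filter itself just takes the total IP datagram length, subtracts the IP header length and the TCP header length, and keeps the packet only when something (a payload) is left over. A sketch of that arithmetic in plain shell, with assumed example header values:

```shell
# Example field values (assumptions for illustration):
#   total IP length 52 bytes, IP header 5 words, TCP data offset 8 words
total=52 ip_ihl=5 tcp_doff=8

# Header lengths are stored in 32-bit words, so <<2 converts to bytes --
# the same trick the BPF filter uses on the raw header nibbles.
payload=$(( total - (ip_ihl << 2) - (tcp_doff << 2) ))
echo "$payload"   # 0 -> a pure ACK/overhead packet, which the filter drops
```

Any packet where this comes out non-zero carries data and makes it through the filter.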
An equally viable alternative is to install tcpflow, which reassembles whole TCP streams instead of printing individual packets.
As an exercise to keep my mind nimble, here’s a write-up on how to use the power of computers to take over the world by out-foxing those slow-moving meatbags who do stock trading, and compete with Skynet on making the most possible profit.
The pieces of this puzzle are:
A messaging backbone (we’ll use AMQP with the RabbitMQ broker)
A complex event processing engine (Esper)
A way to express our greed (EPL statements)
Software that ties this all together, called new-hope (partially written by yours truly)
Under normal circumstances, master servers in a replication setup can be configured to automatically purge old binary logs using the expire_logs_days my.cnf configuration setting.
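A minimal my.cnf fragment for that (the seven-day retention window is just an illustrative value):

```ini
[mysqld]
# automatically purge binary logs older than 7 days
expire_logs_days = 7
```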
However, when the slaves are known to be in sync, it can be beneficial to proactively reduce on-disk size using compression. This can be especially useful in high-churn environments where binary logs grow quickly.
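A minimal sketch of that compression step, assuming the logs live in the usual datadir and that you’ve already determined, from the oldest Relay_Master_Log_File reported by SHOW SLAVE STATUS across all slaves, the first log name that must be kept (the function name and paths here are illustrative):

```shell
# compress_binlogs DIR SAFE_BEFORE
#   DIR         -- directory holding the binary logs
#   SAFE_BEFORE -- first log to KEEP; everything lexicographically
#                  older has been read by every slave and is safe
compress_binlogs() {
  dir=$1; safe=$2
  for f in "$dir"/mysql-bin.[0-9]*; do
    [ -e "$f" ] || continue          # glob matched nothing
    base=${f##*/}
    # compress only logs strictly older than the cutoff
    if expr "$base" \< "$safe" >/dev/null; then
      gzip -9 "$f"
    fi
  done
}

# Example invocation (paths illustrative):
# compress_binlogs /var/lib/mysql mysql-bin.000100
```

Keep in mind the server can’t serve a compressed log to a late-joining slave without decompressing it first, which is why this should only touch logs every slave has already read past.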