8+ years of people or technical leadership experience, or equivalent experience leading a team or working in a team-oriented, collaborative environment
BS/MS or other relevant technical degree
5+ years of hands-on Hadoop experience working with large data sets and Hadoop cluster environments
Familiarity with Hadoop-ecosystem tools and languages such as Python, Pig, Hive, Hadoop Streaming, Sqoop, Oozie, Spark, Flume, Kafka, etc.
Solid understanding of MapReduce and the Hadoop Distributed File System (HDFS)
Experience with statistical modeling, data mining, and/or machine learning is a plus
Understanding of enterprise ETL tools such as Ab Initio, Informatica, or Talend is beneficial
Understanding of Hadoop infrastructure is preferred but not required
Hands-on experience with related/complementary open-source software platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef, Puppet)
Understanding of relational databases (RDBMS/SQL) and NoSQL databases (MongoDB, Neo4j)
Strong analytical skills, problem-solving ability, and attention to detail
Results-driven, with the ability to coordinate and prioritize multiple tasks in a fast-paced environment
Motivated self-starter who can take charge of complex tasks with limited direction
Strong verbal/written communication and presentation skills
Multiple Openings