Hortonworks is seeking an Associate Big Data Developer to join its professional services team in APAC. In this role, you will develop massively scalable solutions to solve complex data problems using Hadoop, Spark, and related Big Data technologies. This client-facing role combines consulting skills with deep technical design and development in the Big Data space. It is a great opportunity for a junior software developer looking to gain experience in the Big Data stack: you will receive mentoring and training, and work with large customers across Asia Pacific.

**Responsibilities:**

- Work closely with Hortonworks Architects to design and build Big Data solutions on Hadoop
- Work directly with customers to implement Big Data solutions at scale using the Hortonworks Data Platform and Hortonworks DataFlow
- Help design and implement Hadoop platform architectures and configurations for customers
- Work closely with Hortonworks teams at all levels to help ensure the success of consulting engagements with customers
- Write and produce technical documentation and knowledge base articles
- Attend speaking engagements when needed
- Keep current with technologies in the Hadoop Big Data ecosystem
- Travel up to 75%

**Qualifications:**

- Aptitude for learning new skills and technologies quickly
- Demonstrable programming experience (in any language)
- Experience in Linux / Unix environments
- Strong analytical and problem-solving abilities
- Theoretical knowledge of big data / analytics concepts
- Proven interest in Big Data development, Data Engineering, or Data Science
- Strong communication, consulting, and presentation skills

**Nice to have (but not required):**

- Bachelor's degree in Software Engineering / IT / Computer Science or a related discipline (Honours/Masters/PhD a plus)
- Hands-on development experience with Apache technologies such as Hadoop, Spark, HBase, Hive, Pig, Storm, Solr, Phoenix, Accumulo, Falcon, Sqoop, Flume, Kafka, or Oozie
- Experience with open-source software
- Experience setting up multi-node Hadoop clusters
- Strong SQL skills
- Java, C++, Python, and/or R development
- Experience with NoSQL data technologies
- Experience in any cloud environment (AWS, Azure, Google Cloud, SoftLayer, OpenStack)
- Understanding of enterprise security solutions such as LDAP and/or Kerberos
- Understanding of networking concepts
- Understanding of Site Reliability Engineering concepts and practices
- Experience with distributed systems, machine learning, or artificial intelligence algorithms
- Ability to understand big data use cases and recommend standard design patterns commonly used in Hadoop-based deployments
- Knowledge of the data management ecosystem, including concepts of data warehousing, ETL, data integration, and data engineering
Hortonworks is dedicated to creating, distributing, and supporting enterprise-ready open data platforms and modern data applications. With a focus on innovation in open source communities such as Apache Hadoop, NiFi, and Spark, Hortonworks collaborates with over 1,600 partners to provide expertise, training, and services for organizations looking to unlock transformational value across various lines of business. For more information about Hortonworks and its products, visit the [official website](http://hortonworks.com/about-us/quick-facts/).