Job Description
The Applications Development Intermediate Programmer Analyst is an intermediate-level role responsible for participating in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
- Consult with users, clients, and other technology groups on issues, recommend programming solutions, and install and support customer exposure systems
- Apply fundamental knowledge of programming languages to design specifications
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging
- Serve as an advisor or coach to new or lower-level analysts
- Identify problems, analyze information, and make evaluative judgments to recommend and implement solutions
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience
- Operate with a limited level of direct supervision, exercising independence of judgment and autonomy
- Act as a Subject Matter Expert to senior stakeholders and/or other team members

Qualifications
- 4-6 years of proven experience developing and managing Big Data solutions using Apache Spark and Scala
- Strong command of Spark Core, Spark SQL, and Spark Streaming
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies such as Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, and Flume
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Familiarity with data warehousing concepts and ETL processes
- Experience in performance tuning of large technical solutions
- Knowledge of data modeling, data architecture, and data integration techniques
- Knowledge of best practices for data security, privacy, and compliance
- Experience with Java, web services, microservices, SOA, and the Hadoop ecosystem
- Experience developing frameworks and utility services, delivering high-quality software through continuous delivery, and using code quality tools
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Knowledge of implementing different data storage solutions

Education
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level overview of the work performed; other job-related duties may be assigned as required.