Purview India Consulting And Services

Purview India Consulting and Services specializes in strategic consulting and IT services, focusing on delivering tailored solutions for clients in various sectors.

43 Job openings at Purview India Consulting And Services
Senior Data Engineer | Purandhar | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Senior DE: In addition to the core Data Engineer requirements in the Job Description below, candidates need experience managing data engineers, experience working with stakeholders, active engagement in key technical discussions, and very good technical knowledge. At least 7 years of experience working in data engineering is required.

Job Description: As a key member of the technical team alongside Engineers, Data Analysts and Business Analysts, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:
- Software design; Scala and Python (Spark) development; automated testing of new and existing components in an Agile, DevOps and dynamic environment.
- Minimum 1 year of experience in Scala.
- Promoting development standards, code reviews, mentoring and knowledge sharing.
- Production support and troubleshooting.
- Implementing tools and processes to handle performance, scale, availability, accuracy and monitoring.
- Liaising with Business Analysts to ensure that requirements are correctly interpreted and implemented.
- Participating in regular planning and status meetings.
- Input to the development process through involvement in Sprint reviews and retrospectives.
- Input into system architecture and design.

To be successful in this role, you should meet the following requirements (must have):
- Scala and Python development; able to understand requirements and come up with solutions.
- Experience using scheduling tools such as Airflow.
- Experience with most of the following technologies: Apache Hadoop, PySpark, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services.
- Sound knowledge of working on Unix/Linux platforms.
- Hands-on experience building data pipelines using Hadoop components: Hive, Spark, Spark SQL.
- Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins) and requirement management in JIRA.
- Understanding of big data modelling using relational and non-relational techniques.
- Experience debugging code issues and sharing the identified differences with the development team.
- Flexibility to adapt to new tooling.

The successful candidate will also meet the following requirements (good to have):
- Experience with Elasticsearch.
- Experience developing Java APIs.
- Experience with data ingestion.
- Understanding or experience of cloud design patterns.
- GCP development experience.
- Exposure to DevOps and Agile project methodologies such as Scrum and Kanban.
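Purely as an illustration of the hands-on pipeline skills the posting names (Spark, Spark SQL, Hive), the sketch below shows a minimal PySpark job. It is not part of the job description, and the database, table and column names are invented for the example.

```python
# Minimal, illustrative PySpark pipeline sketch (not part of the job posting).
# Assumes a Spark installation with Hive support; all table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

def main():
    spark = (
        SparkSession.builder
        .appName("daily-orders-aggregation")  # hypothetical job name
        .enableHiveSupport()
        .getOrCreate()
    )

    # Read a raw Hive table, aggregate with Spark SQL functions,
    # and write the result back as a partitioned Hive table.
    orders = spark.table("raw_db.orders")  # hypothetical source table
    daily = (
        orders
        .where(F.col("order_status") == "COMPLETED")
        .groupBy("order_date", "region")
        .agg(
            F.count("*").alias("order_count"),
            F.sum("order_amount").alias("total_amount"),
        )
    )
    (
        daily.write
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("curated_db.daily_orders")  # hypothetical target table
    )
    spark.stop()

if __name__ == "__main__":
    main()
```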

Senior Data Engineer | Baramati | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Indapur | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Ambegaon | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Kheda | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Mawal | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Pimpri-Chinchwad | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Daund | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Velhe | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Shirur | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Bhor | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Junnar | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Pune | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Senior Data Engineer | Mulshi | 7-10 years | INR 0.5-0.8 Lacs P.A. | Hybrid | Full Time

Role description identical to the Senior Data Engineer listing above.

Data Engineer | Mulug | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

In this role, the Principal Responsibilities and requirements are the same as the Job Description section of the Senior Data Engineer listings above; the additional Senior DE expectations (managing data engineers, stakeholder engagement, and at least 7 years of data engineering experience) apply only to the senior role.
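As a further illustration only, since the postings name experience with scheduling tools such as Airflow, the following is a minimal Airflow 2.x DAG sketch; the DAG id, schedule, owner and script paths are hypothetical and not taken from the job description.

```python
# Minimal, illustrative Airflow DAG sketch (not part of the job posting).
# Assumes Apache Airflow 2.x; DAG id, schedule, owner and paths are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",       # hypothetical team name
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="daily_orders_pipeline",    # hypothetical DAG id
    default_args=default_args,
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",     # run daily at 02:00
    catchup=False,
) as dag:
    # Submit a Spark job such as the one sketched earlier; paths are placeholders.
    run_spark_job = BashOperator(
        task_id="run_spark_aggregation",
        bash_command="spark-submit /opt/jobs/daily_orders_aggregation.py",
    )

    # Basic data-quality check after the load; command is a placeholder.
    validate_output = BashOperator(
        task_id="validate_output",
        bash_command="python /opt/jobs/validate_daily_orders.py",
    )

    run_spark_job >> validate_output
```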

Data Engineer | Narayanpet | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

Role description identical to the Data Engineer listing above.

Data Engineer | Mancherial | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

Role description identical to the Data Engineer listing above.

Data Engineer | Medak | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

Role description identical to the Data Engineer listing above.

Data Engineer | Mahabubnagar | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

Role description identical to the Data Engineer listing above.

Data Engineer | Nalgonda | 7-12 years | INR 8.0-18.0 Lacs P.A. | Hybrid | Full Time

Role description identical to the Data Engineer listing above.
