Purview Services

Purview Services provides innovative solutions in data management, helping organizations leverage their data for insightful decision-making.

144 Job openings at Purview Services
Bluecoat Proxy Professional Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum 4 - 8 years INR 6.0 - 10.0 Lacs P.A. Work from Office Full Time

Responsible for managing and configuring Bluecoat Proxy servers to ensure secure and efficient internet access and content filtering across the organization. Monitors network traffic, applies access control policies, and optimizes bandwidth usage. Conducts regular audits of proxy logs, identifies threats or anomalies, and resolves user access issues promptly. Works closely with IT security to block malicious content, enforce web usage policies, and support compliance. Ensures software updates and patches are installed in a timely manner. Collaborates with system administrators to integrate proxy settings with network infrastructure and provides technical documentation and end-user training as needed.
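The log-audit duty described above can be sketched in a few lines of Python. The log format, field order, and threshold below are invented for illustration and are not Bluecoat's actual log schema:

```python
from collections import Counter

# Hypothetical, simplified proxy access-log lines:
# "<timestamp> <user> <action> <category> <url>"
LOG_LINES = [
    "2024-05-01T09:00:01 alice ALLOWED news http://news.example.com",
    "2024-05-01T09:00:05 bob DENIED malware http://bad.example.net",
    "2024-05-01T09:00:09 bob DENIED malware http://bad.example.net",
    "2024-05-01T09:00:12 carol ALLOWED search http://search.example.org",
]

def audit(lines, deny_threshold=2):
    """Count DENIED hits per user and flag users at or above the threshold."""
    denials = Counter()
    for line in lines:
        _, user, action, _category, _url = line.split()
        if action == "DENIED":
            denials[user] += 1
    return {user: n for user, n in denials.items() if n >= deny_threshold}

flagged = audit(LOG_LINES)
print(flagged)  # bob was denied twice, so he is flagged for review
```

A real audit would stream logs from the appliance and feed flagged users into the incident-review process; the counting-and-threshold shape stays the same.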

Selenium Testing Professional Hubli, Mangaluru, Mysuru, Bengaluru, Belgaum 4 - 9 years INR 6.0 - 11.0 Lacs P.A. Work from Office Full Time

Selenium professional with strong experience in Cucumber and Java. Location: Bangalore. Should be an individual contributor. Strong knowledge of automation with Selenium, Cucumber, and Core Java. Should have hands-on development experience and strong knowledge of designing and developing automation frameworks appropriate to seniority. Experience with the DevOps lifecycle is good to have.

Apigee Professional Warangal, Hyderabad, Nizamabad 5 - 8 years INR 7.0 - 10.0 Lacs P.A. Work from Office Full Time

Band B2: 5-8 years. Key skills for the API platform engineer requirement: 5-8 years' experience in Apigee. Deep knowledge and hands-on experience with API design patterns and RESTful principles. Mature API capabilities, including security, custom analytics, throttling, caching, logging, and request/response modifications, using the API management platform. Hands-on experience in stateless distributed architectures and designing for scalability and performance. Knowledge of API design standards, patterns, and best practices, especially Swagger and OpenAPI 3.0, REST, SOAP, MQ, JSON, and microservices. Proven ability to create visual diagrams using Visio or other tools to communicate integration architectures and solutions.
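The throttling capability mentioned above is commonly implemented as a token bucket. Below is a minimal stdlib-only sketch of that technique; the class and parameter names are my own for illustration, not Apigee policy names:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, the pattern behind API throttling
    policies in API management platforms."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)     # start full
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# No refill: only the first three calls in the burst are allowed.
bucket = TokenBucket(capacity=3, refill_per_sec=0.0)
results = [bucket.allow() for _ in range(5)]
print(results)  # first three allowed, remaining two throttled
```

In a gateway, the bucket would be keyed per API key or client app, and a `False` result would map to an HTTP 429 response.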

PIMCore Professional Mumbai, Nagpur, Thane, Nashik, Pune, Aurangabad 3 - 6 years INR 5.0 - 8.0 Lacs P.A. Work from Office Full Time

We have a new opening in our company. Looking for a Developer/Lead with the below skills: PIM/Product Information Management (must-have skill), MDM/Master Data Management (must-have skill), Logic Broker/ChannelAdvisor (nice to have).

AS 400 Vijayawada, Visakhapatnam, Guntur, Nellore 3 - 5 years INR 5.0 - 7.0 Lacs P.A. Work from Office Full Time

As a Development Specialist you must have:
1. 3-5 years' hands-on expertise in RPG400/IV, RPG Free, CL, SQL, Embedded SQL, Query, and ILE
2. Recent and advanced experience with RPG (ILE/Free) using procedures, service programs, and functions
3. Hands-on experience with application design and development in the BFSI/Banking domain is desirable
4. Experience exposing and consuming web services, JSON, and REST APIs is desirable
5. Change management experience and familiarity with a change management tool
6. Understanding of DevOps concepts and techniques
7. Strong understanding of and working experience with CI/CD and available tools, e.g. Jenkins, Sonar, RDi
8. Working experience in an agile environment
9. Ability to quickly acquire new skills and tools
10. Ability to resolve critical issues in a timely manner
11. Be a clear communicator; document your work and share your ideas
12. Review and be reviewed by your peers
13. Experience releasing what you developed to production and then supporting it
Candidate profile:
1. Be an approachable and supportive team member with a collaborative attitude within a demanding, maturing Agile environment
2. Influence and champion new ideas and methods
3. Great communication: convey your thoughts, ideas, and opinions clearly and concisely, face-to-face or virtually, to all levels upstream and downstream
4. Equally important: listen and reflect back what others communicate to you
5. Regularly demonstrate these qualities: drive, motivation, determination, dedication, resiliency, honesty, and enthusiasm
6. Be culturally aware and sensitive
7. Be flexible under pressure
8. Strong analytical and problem-solving skills
9. Excellent verbal and written communication skills
10. Excellent organisational and presentation skills
11. Ability to communicate with non-technical people

Java Full Stack Developer Kolkata, Mumbai, New Delhi, Hyderabad, Pune, Chennai, Bengaluru 5 - 10 years INR 7.0 - 12.0 Lacs P.A. Work from Office Full Time

Must-have skills: 1. Good experience in backend technologies: Java 8 with lambda expressions, Spring/Spring Boot/Spring Batch, Spring JDBC, SQL, and API development; able to design and build solutions independently. 2. Good experience in front-end technologies: React.js or Angular.js (expert level). 3. AWS expertise: ECS, EC2, build/deployment via Jenkins pipelines, CloudFormation, and SQS.

SAP CRM Technical Professional Warangal, Hyderabad, Nizamabad 8 - 10 years INR 30.0 - 35.0 Lacs P.A. Work from Office Full Time

Minimum 8 to 10 years of ABAP experience. At least 5+ years of experience in ABAP CRM. Should have good communication skills. Has worked on at least one CRM implementation project (complete cycle). Knowledge of the MVC architecture and the One Order framework. Strong working knowledge of CRM UI BOL/GenIL programming. Hands-on experience in ABAP report programming, function modules, and Data Dictionary objects. Aware of OO ABAP concepts, with hands-on experience. Aware of Middleware concepts (such as BDocs and inbound/outbound queues). Has worked on BAdIs, actions, interfaces (inbound and outbound), and proxies. Experience with CRM IS-U is an added advantage (awareness of business master data and technical master data). Experience with OData services, Fiori concepts, and Fiori Launchpad, and in analysing web-service issues, is an added advantage. Experience working on replication issues (e.g. using tcode ecrmrepl) is an added advantage. Team handling and coordination experience is a must. Knowledge of BRF+ and BPEM is an added advantage. Should be comfortable with CRM base customization and product configuration. Should be comfortable tracing issues and performance-tuning code. Should be comfortable with S/4HANA concepts: CDS views, AMDP, Web IDE, table functions, SADL exits, HANA Studio, and exception aggregation. S/4HANA migration and upgrade knowledge is an added advantage.

DevOps/SRE Professionals Bengaluru 4 - 9 years INR 8.0 - 12.0 Lacs P.A. Work from Office Full Time

JD for DevOps Engineer. Total experience required: 8+ years; relevant experience required: 5+ years. Work location: PAN India, Wipro office (hybrid policy). Mandatory skills: Kubernetes (Helm charts) is a strictly mandatory skill, along with Spring Boot, Java, and ActiveMQ. Required skill set: DevOps/SRE software engineer on an existing platform: APIs, Kubernetes, Spring Boot, Java EE, ActiveMQ, Elasticsearch, MSSQL, CI/CD, Argo CD, Terraform, Keycloak, AWS, Azure, Java.

Big Data Engineer mawal, purandhar, baramati, indapur, ambegaon, mulshi, bhor, pimpri-chinchwad, velhe 6 - 11 years INR 15.0 - 25.0 Lacs P.A. Work from Office Full Time

Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines using AWS Glue, Lambda, Step Functions, and Apache Spark. Develop and optimize data lake and data warehouse architectures using AWS S3, Redshift, and Athena. Implement data ingestion, transformation, and storage solutions using AWS Glue, Kinesis, and Kafka. Work with structured and unstructured data to develop efficient data pipelines. Optimize query performance in SQL-based and NoSQL databases (Redshift, DynamoDB, Aurora, PostgreSQL, etc.). Ensure data security, compliance, and governance using IAM roles, KMS encryption, and AWS Lake Formation. Automate data workflows using Terraform, CloudFormation, or CDK. Monitor and troubleshoot data pipelines, ensuring high availability and performance. Collaborate with data scientists, analysts, and engineers to provide robust data solutions. Required skills: Strong experience with AWS data services (Glue, Redshift, S3, Lambda, Kinesis, Athena). Proficiency in Python, SQL, and Scala. Experience with ETL/ELT pipeline design and data warehousing concepts. Strong knowledge of relational and NoSQL databases (PostgreSQL, MySQL, DynamoDB). Familiarity with DevOps practices and CI/CD tools (Jenkins, GitHub Actions). Experience with Kafka or Kinesis for real-time data streaming. Knowledge of Terraform, CloudFormation, or CDK for infrastructure automation. Hands-on experience with data security and governance. Strong analytical and problem-solving skills.
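The extract/transform/load flow this role describes can be reduced to a library-free Python sketch. The record shape and values below are made up for illustration; a production pipeline would use Spark or Glue, but the stage structure is the same:

```python
# extract: pretend these records arrived from S3 or a Kinesis stream
RAW_EVENTS = [
    {"user": "u1", "amount": "10.50", "valid": "true"},
    {"user": "u2", "amount": "oops", "valid": "true"},   # unparseable amount
    {"user": "u1", "amount": "4.50", "valid": "false"},  # flagged invalid
]

def transform(events):
    """Keep valid rows, coerce string amounts to floats, drop bad records."""
    out = []
    for e in events:
        if e["valid"] != "true":
            continue
        try:
            out.append({"user": e["user"], "amount": float(e["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine these for review
    return out

def load(rows):
    """Aggregate per user, standing in for a warehouse (e.g. Redshift) write."""
    totals = {}
    for r in rows:
        totals[r["user"]] = totals.get(r["user"], 0.0) + r["amount"]
    return totals

print(load(transform(RAW_EVENTS)))  # only u1's single valid row survives
```

The same three-stage shape scales up by swapping the Python lists for distributed datasets and the dict aggregation for a warehouse write.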

Data Engineer velhe, ambegaon, indapur 5 - 10 years INR 10.0 - 20.0 Lacs P.A. Hybrid Full Time

Role Description: Sr. Data Engineer. Location: Pune. Mode: Hybrid. Immediate joiner / 20 days. Job Description: To be successful in this role, you should meet the following requirements (must-have): At least 5+ years' experience working in data engineering. Scala and Python development, including understanding requirements and coming up with solutions. Experience using scheduling tools such as Airflow. Experience with most of the following technologies: Apache Hadoop, PySpark, Apache Spark, YARN, Hive, Python, ETL frameworks, MapReduce, SQL, RESTful services. Sound knowledge of working on the Unix/Linux platform. Hands-on experience building data pipelines using Hadoop components (Hive, Spark, Spark SQL). Experience with industry-standard version control tools (Git, GitHub), automated deployment tools (Ansible and Jenkins), and requirement management in JIRA. Understanding of big data modelling using relational and non-relational techniques. Experience debugging code issues and publishing the highlighted differences to the development team. Flexibility to adapt to new tooling. Job responsibilities: Software design; Scala and Python (Spark) development and automated testing of new and existing components in an Agile, DevOps, dynamic environment. Minimum 1 year of experience in Scala. Promoting development standards, code reviews, mentoring, and knowledge sharing. Production support and troubleshooting. Implementing tools and processes, handling performance, scale, availability, accuracy, and monitoring. Liaison with BAs to ensure that requirements are correctly interpreted and implemented. Participation in regular planning and status meetings. Input into the development process through involvement in sprint reviews and retrospectives. Input into system architecture and design. The successful candidate will also meet the following requirements (good to have): Experience with Elasticsearch. Experience developing Java APIs. Experience with data ingestion. Understanding or experience of cloud design patterns. GCP development experience. Exposure to DevOps and Agile project methodologies such as Scrum and Kanban. This role offers the opportunity to be part of an innovative team, delivering impactful solutions while developing your technical expertise. Share resume at: Pratikshya.nayak@purviewservices.com
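The Airflow-style scheduling mentioned above boils down to running tasks in dependency order. Here is a minimal sketch of that idea using Python's stdlib `graphlib` (the task names are invented for illustration; Airflow adds operators, retries, and a scheduler on top of this core):

```python
from graphlib import TopologicalSorter

# Each key depends on the tasks in its value set, like upstream tasks
# in an Airflow DAG: extract -> transform -> load -> report.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order yields tasks only after all of their dependencies,
# which is exactly the guarantee a DAG scheduler provides.
order = list(TopologicalSorter(deps).static_order())
print(order)  # extract runs first, report last
```

A scheduler like Airflow runs this resolution continuously, dispatching each task (possibly in parallel, for independent branches) as soon as its upstream tasks succeed.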
