7.0 - 9.0 years
0 Lacs
india
On-site
Role: Lead Data Engineer
Location: Kochi/Trivandrum
Experience: 7+ years
Skill Set: Python, PySpark, AWS
Budget: Max 25 LPA

Job Overview
We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.

Key Responsibilities
Data Ingestion Framework:
- Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
- Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
Data Quality & Validation:
- Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
- Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
API Development:
- Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
- Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
Collaboration & Agile Practices:
- Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
- Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab).

Required Qualifications
Experience & Technical Skills:
- Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
- Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
- AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
- Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
- API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
- CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably GitLab) and Agile development methodologies.
Soft Skills:
- Strong problem-solving abilities and attention to detail.
- Excellent communication and interpersonal skills with the ability to work independently and collaboratively.
- Capacity to quickly learn and adapt to new technologies and evolving business requirements.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience with additional AWS services such as Kinesis, Firehose, and SQS.
- Familiarity with data lakehouse architectures and modern data quality frameworks.
- Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.
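For illustration only, here is a minimal PySpark sketch of the kind of automated data quality checks this role describes; the dataset path, column names, and rules are hypothetical placeholders rather than part of the posting.

```python
# Illustrative sketch: minimal PySpark data-quality checks on an ingested dataset.
# Paths, columns, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingestion-quality-checks").getOrCreate()

# Hypothetical ingested dataset
df = spark.read.parquet("s3://example-bucket/ingest/orders/")

checks = {
    # rule name -> number of violating rows
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
    "duplicate_order_id": df.count() - df.dropDuplicates(["order_id"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # A real pipeline might raise, quarantine bad records, or publish an alert here.
    print(f"Data quality checks failed: {failed}")
else:
    print("All data quality checks passed")
```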
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior Cloud Data Developer, you will play a crucial role in bridging the gap between data engineering and cloud-native application development. Your primary responsibility will be to design and maintain scalable data solutions in the cloud by leveraging cutting-edge technologies and modern development practices.

You will be tasked with developing data-centric applications using Java Spring Boot and various AWS services. Your expertise will be essential in creating and managing efficient data pipelines, ETL processes, and data workflows, and in orchestrating data processing systems. Real-time data processing solutions using AWS SNS/SQS and AWS Pipes will be a key part of your role. Additionally, you will be responsible for optimizing data storage solutions using AWS Aurora and S3, as well as managing data discovery and metadata using the AWS Glue Data Catalog. Your skills will also be utilized to create search and analytics solutions using AWS OpenSearch Service and to design event-driven architectures for data processing.

To excel in this role, you must have strong proficiency in Java and the Spring Boot framework. Extensive experience with AWS data services such as AWS EMR, AWS Glue Data Catalog, AWS OpenSearch Service, AWS Aurora, and AWS S3 is essential. Your expertise in data pipeline development using tools like Apache NiFi, AWS MWAA, AWS Pipes, and AWS SNS/SQS will be highly valuable in fulfilling the technical requirements of this position.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
Join us as a Data Engineer. You'll be the voice of our customers, using data to tell their stories and put them at the heart of all decision-making. We will look to you to drive the build of effortless, digital-first customer experiences. If you're ready for a new challenge and want to make a far-reaching impact through your work, this could be the opportunity you're looking for. This role is based in India, and as such, all normal working days must be carried out in India. We're offering this role at vice president level.

As a Data Engineer, you'll be looking to simplify our organization by developing innovative data-driven solutions through data pipelines, modeling, and ETL design, aiming to be commercially successful while keeping our customers and the bank's data safe and secure. You'll drive customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tool to gather and build data solutions. You'll support our strategic direction by engaging with the data engineering community to deliver opportunities, along with carrying out complex data engineering tasks to build a scalable data architecture.

Your responsibilities will also include building advanced automation of data engineering pipelines through the removal of manual stages, embedding new data techniques into our business through role modeling, training, and experiment design oversight, delivering a clear understanding of data platform costs to meet your department's cost-saving and income targets, sourcing new data using the most appropriate tooling for the situation, and developing solutions for streaming data ingestion and transformations in line with our streaming strategy.

To thrive in this role, you'll need a strong understanding of data usage and dependencies and experience of extracting value and features from large-scale data. You'll also bring practical experience of programming languages alongside knowledge of data and software engineering fundamentals. Additionally, you'll need experience of ETL technical design, data quality testing, cleansing and monitoring, data sourcing, and exploration and analysis; experience in operational resilience including incident management, incident detection, resolution, reporting, and optimization; a good understanding of data pipeline failures; experience of working on ServiceNow, Airflow, StreamSets, AWS EMR, GitLab, and Snowflake; and strong communication skills with the ability to proactively engage and manage a wide range of stakeholders.
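As an illustration of the pipeline automation this role involves, a minimal Airflow DAG sketch follows; the DAG id, task logic, and schedule are hypothetical placeholders, not a prescribed design.

```python
# Minimal Airflow DAG sketch: extract -> transform -> load, run daily.
# Task bodies are placeholders; in practice they might call Snowflake, EMR, etc.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull data from a source system
    print("extracting source data")


def transform(**context):
    # Placeholder: apply cleansing and data-quality rules
    print("transforming and validating data")


def load(**context):
    # Placeholder: load curated data into the warehouse
    print("loading curated data")


with DAG(
    dag_id="example_customer_data_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```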
Posted 2 weeks ago
2.0 - 6.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
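For illustration, a minimal sketch of the kind of streaming ingestion pipeline described above, using Spark Structured Streaming to read from Kafka; the topic, broker address, and output paths are hypothetical, and the Spark-Kafka connector package is assumed to be on the classpath.

```python
# Illustrative sketch: read JSON events from Kafka, parse and filter them,
# and write the curated stream to Parquet. Names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
)

parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
       .select("e.*")
       .filter(F.col("event_id").isNotNull())
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3://example-bucket/curated/events/")        # hypothetical
    .option("checkpointLocation", "s3://example-bucket/chk/events/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```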
Posted 2 weeks ago
2.0 - 6.0 years
12 - 16 Lacs
kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution to implement it using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
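For illustration, a hedged sketch of the Hive-to-Spark SQL rewrite mentioned above: the same aggregation expressed through Spark SQL and the DataFrame API so it runs on the Spark engine; table and column names are hypothetical.

```python
# Illustrative sketch: a Hive-style aggregation executed via Spark SQL and the
# equivalent DataFrame API. Database, table, and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("hive-to-spark-sql")
    .enableHiveSupport()  # lets Spark read existing Hive tables
    .getOrCreate()
)

# Original Hive-style query, now executed by Spark SQL
daily_totals_sql = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM sales_db.orders
    GROUP BY order_date
""")

# Equivalent DataFrame API version
daily_totals_df = (
    spark.table("sales_db.orders")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

daily_totals_df.write.mode("overwrite").saveAsTable("sales_db.daily_totals")
```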
Posted 2 weeks ago
16.0 - 21.0 years
18 - 22 Lacs
gurugram
Work from Office
About the Role: OSTTRA India

The Role: Enterprise Architect - Integration

The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals, who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.

The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.

What's in it for you: The current objective is to identify individuals with 16+ years of experience who have high expertise, to join their existing team of experts who are spread across the world. This is your opportunity to start at the beginning and get the advantages of rapid early growth. This role is based out of Gurgaon and is an excellent opportunity to work with different teams and colleagues across multiple regions globally.

Responsibilities:
- The role shall be responsible for establishing, maintaining, socialising and realising the target-state integration strategy for the FX & Securities post-trade businesses of OSTTRA. This shall encompass the post-trade lifecycle of our businesses, including connectivity with clients, the markets ecosystem, and OSTTRA's post-trade family of networks, platforms and products.
- The role shall partner with product architects, product managers, delivery heads and teams for refactoring the deliveries towards the target state. They shall also be responsible for the efficiency, optimisation, oversight and troubleshooting of current-day integration solutions, platforms and deliveries, in addition to the target-state focus.
- The role shall be expected to produce and maintain an integration architecture blueprint. This shall cover the current state and propose a rationalised view of the target state of end-to-end integration flows and patterns.
- The role shall also provide for and enable the needed technology platforms/tools and engineering methods to realise the strategy.
- The role shall enable standardisation of protocols/formats (at least within the OSTTRA world) and tools, and reduce the duplication and non-differentiated heavy lift in systems.
- The role shall enable the documentation of flows and the capture of standard message models.
- The integration strategy shall also include a transformation strategy, which is vital in a multi-lateral/party/system post-trade world.
- The role shall partner with other architects and strategies/programmes and enable the demands of UI, application, and data strategies.

What We're Looking For:
- Rich domain experience of the financial services industry, preferably with financial markets, pre/post-trade lifecycles and large-scale buy/sell/brokerage organisations.
- Should have experience of leading integration strategies and delivering the integration design and architecture for complex programmes and financial enterprises catering to key variances of latency/throughput.
- Experience with API Management platforms (like AWS API Gateway, Apigee, Kong, MuleSoft Anypoint) and key management concepts (API lifecycle management, versioning strategies, developer portals, rate limiting, policy enforcement).
- Should be adept with integration and transformation methods, technologies and tools.
- Should have experience of domain modelling for messages/events/streams and APIs.
- Rich experience of architectural patterns like event-driven architectures, microservices, event streaming, message processing/orchestration, CQRS, event sourcing, etc.
- Experience of protocols or integration technologies like FIX, Swift, MQ, FTP, API, etc., including knowledge of authentication patterns (OAuth, mTLS, JWT, API keys), authorization mechanisms, data encryption (in transit and at rest), secrets management, and security best practices.
- Experience of messaging formats and paradigms like XSD, XML, XSLT, JSON, Protobuf, REST, gRPC, GraphQL, etc.
- Experience of technologies like Kafka or AWS Kinesis, Spark streams, Kubernetes/EKS, AWS EMR.
- Experience of languages like Java and Python, and message orchestration frameworks like Apache Camel, Apache NiFi, AWS Step Functions, etc.
- Experience in designing and implementing traceability/observability strategies for integration systems and familiarity with relevant framework tooling.
- Experience of engineering methods like CI/CD, build/deploy automation, infrastructure as code, and integration testing methods and tools.
- Should have the appetite to review and write code for complex problems, and should find interest and energy in design discussions and reviews.
- Experience and strong understanding of multicloud integration patterns.

The Location: Gurgaon, India

About Company Statement: About OSTTRA
Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA - however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts. OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.

What's In It For You - Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country, visit https://spgbenefits.com/benefit-summaries

-----------------------------------------------------------

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------
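As a simple illustration of the event-driven messaging patterns listed in the requirements above, here is a minimal Kafka producer/consumer sketch in Python using the kafka-python client; the broker address, topic, and message shape are assumptions, and a real post-trade platform would typically add schemas (e.g. Protobuf/Avro), authentication (mTLS/OAuth), and stronger delivery guarantees.

```python
# Illustrative sketch: publish and consume JSON messages with kafka-python.
# Broker, topic, and payload are hypothetical.
import json

from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # assumed broker address
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("trade-events", {"trade_id": "T-1", "status": "CONFIRMED"})
producer.flush()

consumer = KafkaConsumer(
    "trade-events",
    bootstrap_servers="broker:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.value)
```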
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Oracle Cloud is a comprehensive enterprise-grade platform that offers best-in-class services across Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). The Oracle Cloud platform offers choice and flexibility for customers to build, deploy, integrate, and extend applications in the cloud, enabling them to adapt to rapidly changing business requirements, promote interoperability and avoid lock-in. The platform supports numerous open standards (SQL, HTML5, REST, and more), open-source solutions (such as Kubernetes, Hadoop, Spark and Kafka) and a wide variety of programming languages, databases, tools and integration frameworks.

Your Opportunity: Values are our foundation and how we deliver excellence. We iterate and improve based on data and customer feedback. We are constantly learning and taking opportunities to grow our careers and ourselves. We challenge each other to stretch beyond our past to build our future. You are the builder here. Your versatility will be your greatest asset as you turn your hand to development, design and execution. You'll have the opportunity to collaborate with the brightest minds in the industry and bring fresh insight to everything you do. Solve fascinating, high-scale problems and enjoy extraordinary career growth at a company that wants to see you thrive.

Your Role: We're looking for a Senior Software Engineer with a DevSecOps mindset to enable our users to run large-scale data processing jobs faster and cheaper in the cloud. You will have an opportunity to solve resiliency and scalability problems in distributed systems and data processing platforms. What you develop will be used by customers around the world to process data at petabyte scale across thousands of nodes. You will also have the opportunity to play a pivotal role in advancing GenAI Agent Observability, a core focus area for the organization's AI development and success. We provide lots of training. We share, help and learn from each other. We are passionate and motivated to grow ourselves and your career.

Responsibilities include:
- Work autonomously, but also collaborate with team members, to lead the design and implementation of key parts of the service.
- Write efficient, understandable, debuggable and testable code to handle large-scale transactions.
- Understand the OCI ecosystem and the broader Oracle ecosystem on the cloud, including data processing, management and retrieval aspects.
- Understand large-scale batch and streaming job processing.
- Apply AI/ML techniques to design and implement systems for observability, focusing on data collection, processing, and analytics.
- Continuously improve the performance, reliability and scalability of compute/storage/networking infrastructure resources.
- Stay informed of new technologies and propose enhancements.
- Troubleshooting: maintain a deep understanding of our services and dependencies in order to respond quickly and efficiently to major incidents and minimize service disruptions when they occur.
- Manage environment setup and service deployment across development and production environments.
- Offer exceptional customer support, showcasing proficiency in Oracle Data and AI technologies and cloud computing services.

Qualifications:
- A Bachelor's or Master's degree in Computer Science or a related field is required.
- Minimum 5 years of software development experience, with a strong foundation in object-oriented programming (Java, Python) or Scala.
- Prior experience in building large-scale data analytics systems in the cloud, such as Spark, Hadoop, Presto, AWS EMR, GCP Dataproc, AWS Glue/Athena, Flink, etc.
- Solid understanding of and experience with Linux, containers, Docker and Kubernetes.
- Understanding of cloud computing fundamentals, networking, and monitoring tools.
- Exposure to LLM frameworks and language models is advantageous.
- Background in AI/ML application development, including model training and deployment, is a plus.
- Strong motivation to work with users to build a product that delights customers.
- Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.
- Proficient in using project management and collaboration tools (Jira, Confluence, Bitbucket).
- Familiarity with Agile and Scrum development methodologies.
- Awareness of cloud security and operational practices in web application delivery.

Key Skills:
- Multitasking: ability to work on multiple tasks at once.
- Problem-solving: uses problem-solving skills to isolate and solve problems with programs to keep progress on track.
- Strong communication and analytical skills.
- Excellent problem-solving and analytical skills; handles hard problems with a positive, can-do attitude.
- Team player, able to work with others of all skill levels.

As a member of the software engineering division, you will assist in defining and developing software for tasks associated with developing, debugging or designing software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs.

Career Level - IC3
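For illustration, a minimal PySpark batch job instrumented with simple run metrics, in the spirit of the observability focus described in this role; paths and metric names are hypothetical placeholders.

```python
# Illustrative sketch: a small batch job that emits basic run metrics
# (row counts, duration) alongside its output. Paths are hypothetical.
import json
import logging
import time

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("batch-job")

spark = SparkSession.builder.appName("observable-batch-job").getOrCreate()

start = time.time()
events = spark.read.json("s3://example-bucket/raw/events/")  # assumed input

summary = (
    events.filter(F.col("status") == "ok")
          .groupBy("event_type")
          .count()
)
summary.write.mode("overwrite").parquet("s3://example-bucket/curated/event_counts/")

metrics = {
    "job": "observable-batch-job",
    "input_rows": events.count(),
    "output_rows": summary.count(),
    "duration_seconds": round(time.time() - start, 2),
}
# In practice these would be published to a metrics/alerting backend rather than a log.
log.info("run metrics: %s", json.dumps(metrics))
```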
Posted 2 weeks ago
8.0 - 12.0 years
6 - 10 Lacs
pune
Work from Office
We are seeking a dynamic and experienced Tech Lead with a strong foundation in Java and Apache Spark to join our team. In this role, you will lead the development and deployment of scalable cloud-based data solutions, leveraging your expertise in AWS and big data technologies. Key Responsibilities: Lead the design, development, and deployment of scalable and reliable data processing solutions on AWS using Java and Spark. Architect and implement big data processing pipelines using Apache Spark on AWS EMR. Develop and deploy Serverless applications using AWS Lambda, integrating with other AWS services. Utilize Amazon EKS for container orchestration and microservices management. Design and implement workflow orchestration using Apache Airflow for complex data pipelines. Collaborate with cross-functional teams to define project requirements and ensure seamless integration of services. Mentor and guide team members in Java development best practices, cloud architecture, and data engineering. Monitor and optimize performance and cost of deployed solutions across AWS infrastructure. Stay current with emerging technologies and industry trends to drive innovation and maintain a competitive edge. Required Skills: Strong hands-on experience in Java development. Proficiency in Apache Spark for distributed data processing. Experience with AWS services including EMR, Lambda, EKS, and Airflow. Solid understanding of Serverless architecture and microservices. Proven leadership and mentoring capabilities. Excellent problem-solving and communication skills.
Posted 2 weeks ago
3.0 - 5.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 3-5+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 2 weeks ago
5.0 - 8.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5-8+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 6+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 5 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 2 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
pune, maharashtra, india
Remote
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your Role
- IT experience with a minimum of 5+ years of experience in creating data warehouses, data lakes, ETL/ELT and data pipelines on cloud.
- Data pipeline implementation experience with any of these cloud providers - AWS, Azure, GCP - preferably in the Life Sciences domain.
- Experience with cloud storage, cloud database, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS and S3.
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow and Dataproc.
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling.

Your Profile
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.
- Knowledge of IoT and real-time streaming would be an added advantage.
- Lead architectural/technical discussions with the client.
- Excellent communication and presentation skills.

What you will love about working here
- We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance.
- At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities.
- Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
bengaluru, karnataka, india
Remote
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Should have developed or worked on at least one Gen AI project.
- Has data pipeline implementation experience with any of these cloud providers - AWS, Azure, GCP.
- Experience with cloud storage, cloud database, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Has good knowledge of cloud compute services and load balancing.
- Has good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
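As an illustration of the cloud utility functions mentioned above, a minimal AWS Lambda handler sketch in Python follows; the S3 event handling and any downstream hand-off (e.g. to Step Functions or SQS) are hypothetical, not a prescribed design.

```python
# Illustrative sketch: a Lambda handler for S3 object-created events that reads
# basic object metadata. Bucket, key structure, and downstream steps are hypothetical.
import json

import boto3

s3 = boto3.client("s3")


def handler(event, context):
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        head = s3.head_object(Bucket=bucket, Key=key)
        results.append({
            "bucket": bucket,
            "key": key,
            "size_bytes": head["ContentLength"],
        })

    # A real pipeline might start a Step Functions execution or publish to SQS here.
    return {"statusCode": 200, "body": json.dumps(results)}
```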
Posted 2 weeks ago
2.0 - 6.0 years
12 - 16 Lacs
bengaluru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution to implement it using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
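For illustration, a hedged skeleton of a PySpark-based AWS Glue job of the kind referenced above; the catalog database, table, and output path are hypothetical placeholders.

```python
# Illustrative sketch: standard Glue job boilerplate that reads a catalog table,
# applies a simple transformation, and writes curated output to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table)
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders"
)

# Simple transformation via the DataFrame API
df = source.toDF().filter("amount > 0").dropDuplicates(["order_id"])

# Write curated output back to S3 (hypothetical path)
df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```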
Posted 3 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 3 weeks ago
3.0 - 5.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 3-5+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 3 weeks ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
ROLE PROFILE:
Are you ready to take your engineering career to the next level? Join us as a Mid Lead Engineer and contribute to building state-of-the-art data platforms in AWS, leveraging Python and Spark. Be part of a dynamic team, driving innovation and scalability for data solutions in a supportive and hybrid work environment.

ROLE SUMMARY:
This role is ideal for an experienced data engineer looking to step into a leadership position while remaining hands-on with cutting-edge technologies. You will design, implement and optimize ETL workflows using Python and Spark, contributing to our robust data lakehouse architecture on AWS. Success in this role requires technical expertise, strong problem-solving skills and the ability to collaborate effectively within an agile team.

WHAT YOU'LL BE DOING:
- Design, build, and maintain robust, scalable and efficient ETL pipelines using Python and Spark.
- Develop workflows leveraging AWS services such as EMR Serverless, Glue, Glue Data Catalog, Lambda and S3.
- Implement data quality frameworks and governance practices to ensure reliable data processing.
- Collaborate with cross-functional teams to gather requirements, provide technical insights and deliver high-quality solutions.
- Optimize existing workflows and drive migration to a modern data lakehouse architecture, integrating Apache Iceberg.
- Enforce coding standards, design patterns, and system architecture best practices.
- Monitor system performance and ensure data reliability through proactive optimizations.
- Contribute to technical discussions, mentor junior team members and foster a culture of learning and innovation.

WHAT YOU'LL BRING:
Essential Skills
- Expertise in Python and Spark, with proven experience in designing and implementing data workflows.
- Hands-on experience with AWS EMR Serverless, Glue, Glue Data Catalog, Lambda, S3, and EMR.
- Strong understanding of data quality frameworks, governance practices and scalable architectures.
- Practical knowledge of Apache Iceberg within data lakehouse solutions.
- Problem-solving skills and experience optimizing workflows for performance and cost-efficiency.
- Agile methodology experience, including sprint planning and retrospectives.
- Excellent communication skills for articulating technical solutions to diverse stakeholders.
Desirable Skills
- Familiarity with additional programming languages such as Java.
- Experience with serverless computing paradigms.
- Knowledge of visualization or reporting tools.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.

Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth.
Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives.

We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone's race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.

Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it's used for, and how it's obtained, as well as your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
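As an illustration of the Apache Iceberg lakehouse work described in this role, a hedged PySpark sketch of writing to an Iceberg table follows; the catalog configuration, warehouse path, and table names are assumptions, and the Iceberg runtime and AWS Glue catalog dependencies must be available for this configuration to work.

```python
# Illustrative sketch: configure an Iceberg catalog on a Spark session and
# create/replace an Iceberg table from a staged dataset. Names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-write")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")  # assumed Glue-backed catalog
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

df = spark.read.parquet("s3://example-bucket/staging/trades/")  # hypothetical input

# Create or replace an Iceberg table in the configured catalog
df.writeTo("lakehouse.analytics.trades").using("iceberg").createOrReplace()

# Subsequent incremental loads could append instead:
# new_rows.writeTo("lakehouse.analytics.trades").append()
```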
Posted 3 weeks ago
10.0 - 20.0 years
50 - 65 Lacs
hyderabad
Hybrid
Key Responsibilities:
- Team Leadership & Technical Mentorship: Lead, manage, and mentor a team of data engineers, fostering a culture of technical excellence and collaborative problem-solving. Conduct code reviews and provide architectural guidance.
- Hands-On Technical Contribution: Architect and implement critical components of the data platform. Expertise with PySpark, Scala, and SQL will be essential.
- Architectural Ownership: Design and own the roadmap for the data ecosystem. Lead the architecture of scalable and resilient data solutions, including Data Lakehouse, real-time streaming services using Kafka, and large-scale batch processing jobs using Apache Spark.
- Platform Modernization: Drive the evolution of the data stack. Cloud-native service experience (AWS preferred: S3, EMR, Glue, Redshift) and containerization (Docker, Kubernetes) experience is a must.
- Cross-Functional Partnership: Act as the primary technical liaison for data. Collaborate closely with Product Managers, Data Scientists, and Analysts to translate business requirements into robust, scalable data models and pipelines.
- Operational Rigor: Implement and enforce best practices for data quality, governance, monitoring, and CI/CD for data infrastructure. Ensure the data platform is reliable, performant and secure.

Hard Skills & Qualifications:
Experience Requirement:
- A minimum of 10 years of professional experience in software and data engineering.
- At least 3 years of direct people management experience, leading a team of engineers.
Technical Requirements:
- Expert-level proficiency in Apache Spark, with hands-on coding ability in PySpark and/or Scala.
- Strong command of SQL and extensive experience with data modeling and data warehousing solutions (e.g., Redshift, Snowflake, BigQuery).
- Proven experience building and maintaining production-grade data pipelines and distributed systems.
- Deep experience with AWS data services (S3, EMR, Glue, Kinesis, Redshift).
Preferred:
- Experience building or managing a Data Lakehouse architecture.
- Hands-on experience with stream-processing frameworks like Kafka Streams or Spark Streaming.
- Familiarity with workflow orchestration tools like Airflow.
- Knowledge of containerization and orchestration with Docker and Kubernetes.
- Bachelor's or Master's degree in Computer Science or a related engineering field.
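For illustration, a hedged sketch of submitting a Spark batch step to an existing AWS EMR cluster with boto3, one possible way of running the large-scale Spark jobs described above; the cluster id, script location, region, and arguments are hypothetical.

```python
# Illustrative sketch: add a spark-submit step to a running EMR cluster via boto3.
# Cluster id, region, and script path are hypothetical placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")  # assumed region

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLECLUSTERID",  # hypothetical cluster id
    Steps=[
        {
            "Name": "nightly-aggregation",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-bucket/jobs/nightly_aggregation.py",  # hypothetical script
                    "--run-date", "2024-01-01",
                ],
            },
        }
    ],
)
print("submitted step ids:", response["StepIds"])
```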
Posted 3 weeks ago
3.0 - 5.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 3-5+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer.
Posted 3 weeks ago
4.0 - 9.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 3 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills
- Exposure to streaming solutions and message brokers like Kafka

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 3 weeks ago
5.0 - 8.0 years
12 - 16 Lacs
kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases.
- Process the data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies for the various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka and cloud computing.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5-8+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 6+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala
- Minimum 5 years of experience on Cloud Data Platforms on AWS
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera certified Spark developer
Posted 3 weeks ago
3.0 - 6.0 years
8 - 10 Lacs
hyderabad
Work from Office
The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide ratings and research information to clients. Our work deals with content ingestion and data feed generation, as well as exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible. As a member of the Xpressfeed team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our software engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application design, write high-quality code and innovate on how to improve the overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions and code, and understands the value of cloud solutions, we would like to talk to you.

What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server-based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development.

Responsibilities:
- Analyze, design and develop solutions within a multi-functional Agile team to support key business needs for the data feeds.
- Design, implement and test solutions using AWS EMR for content ingestion.
- Work on complex SQL Server projects involving high-volume data.
- Engineer components and common services based on standard corporate development models, languages and tools.
- Apply software engineering best practices while also leveraging automation across all elements of solution delivery.
- Collaborate effectively with technical and non-technical stakeholders.
- Must be able to document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc.

Basic Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
- 3-6 years of experience in application development.
- Minimum of 5 years of hands-on experience with Scala.
- Minimum of 5 years of hands-on experience with Microsoft SQL Server.
- Solid understanding of Amazon Web Services (AWS) and cloud-based development.
- In-depth knowledge of system architecture, object-oriented programming, and design patterns.
- Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing.

Preferred Qualifications:
- Familiarity with AWS services: EMR, Auto Scaling, EKS.
- Working knowledge of Snowflake.
- Preferred experience in Python development.
- Familiarity with the Financial Services domain and Capital Markets is a plus.
- Experience developing systems that handle large volumes of data and require high computational performance.
Posted 4 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
At PwC, the focus in data and analytics engineering is on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. You play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will concentrate on designing and building data infrastructure and systems to enable efficient data processing and analysis. Your responsibilities include developing and implementing data pipelines, data integration, and data transformation solutions.

As an AWS Architect / Manager at PwC - AC, you will interact with the Offshore Manager/Onsite Business Analyst to understand the requirements and will be responsible for end-to-end implementation of Cloud data engineering solutions such as an Enterprise Data Lake and Data Hub in AWS. Strong experience in AWS cloud technology is required, along with planning and organization skills. You will work as a cloud architect/lead on an agile team and provide automated cloud solutions, monitoring the systems routinely to ensure that all business goals are met as per the business requirements.

**Position Requirements:**

**Must Have:**
- Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
- Strong expertise in the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data Hub in AWS
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python / Java (see the Snowflake loading sketch after this posting)
- Design scalable data architectures with Snowflake, integrating cloud technologies (AWS, Azure, GCP) and ETL/ELT tools such as DBT
- Guide teams in proper data modeling (star, snowflake schemas), transformation, security, and performance optimization
- Experience in loading from disparate data sets and translating complex functional and technical requirements into detailed design
- Deploying Snowflake features such as data sharing, events, and lake-house patterns
- Experience with data security and data access controls and design
- Understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake, dimensional modeling)
- Good knowledge of AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage
- Proficient in Lambda and Kappa architectures
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Knowledge of Big Data frameworks and related technologies, with experience in Hadoop and Spark
- Strong experience in AWS compute services like AWS EMR, Glue, and SageMaker, and storage services like S3, Redshift, and DynamoDB
- Experience with AWS streaming services like AWS Kinesis, AWS SQS, and AWS MSK
- Troubleshooting and performance tuning experience in the Spark framework (Spark Core, Spark SQL, and Spark Streaming)
- Experience in flow tools like Airflow, NiFi, or Luigi
- Knowledge of application DevOps tools (Git, CI/CD frameworks), with experience in Jenkins or GitLab and rich experience in source code management tooling such as CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Account Config, and AWS Config Rules
- Understanding of cloud data migration processes, methods, and project lifecycle
- Business/domain knowledge in Financial Services/Healthcare/Consumer Markets/Industrial Products/Telecommunication, Media and Technology/Deal Advisory, along with technical expertise
- Experience in leading technical teams, guiding and mentoring team members
- Analytical and problem-solving skills
- Communication and presentation skills
- Understanding of Data Modeling and Data Architecture

**Desired Knowledge/Skills:**
- Experience in building stream-processing systems using solutions such as Storm or Spark Streaming
- Experience in Big Data ML toolkits like Mahout, SparkML, or H2O
- Knowledge of Python
- Certification in AWS Architecture desirable
- Worked in Offshore/Onsite engagements
- Experience in AWS services like Step Functions and Lambda
- Project management skills with consulting experience in complex program delivery

**Professional And Educational Background:** BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA

**Minimum Years Experience Required:** Candidates with 8-12 years of hands-on experience
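As an illustration of the Snowflake ELT loading pattern referenced in the Must Have list (SnowSQL/Snowpipe-style loads from cloud storage), here is a minimal, hedged sketch using the snowflake-connector-python package. The account, warehouse, database, stage, and table names are hypothetical, and real credentials would come from a secrets manager rather than be hard-coded.

```python
# Minimal sketch: load Parquet files landed in S3 into Snowflake via an external stage.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ELT_SVC",
    password="***",            # in practice, source from a secrets manager
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # @raw_s3_stage is assumed to be an external stage pointing at the S3 landing bucket.
    cur.execute("""
        COPY INTO RAW.CUSTOMER_EVENTS
        FROM @raw_s3_stage/customer_events/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```

In practice the same COPY statement is often wired to Snowpipe so that new files trigger loads automatically, which is the event-driven pattern the posting alludes to.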
Posted 4 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As an AWS Developer at PwC's Advisory Acceleration Center, you will collaborate with the Offshore Manager and Onsite Business Analyst to understand requirements and take charge of implementing Cloud data engineering solutions on AWS, such as an Enterprise Data Lake and Data Hub. With a focus on architecting and delivering scalable cloud-based enterprise data solutions, you will bring your expertise in end-to-end implementation of Cloud data engineering solutions using tools like Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python/Java. Your responsibilities will include loading disparate data sets, translating complex requirements into detailed designs, and deploying Snowflake features like data sharing, events, and lake-house patterns.

You are expected to possess a deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling, and demonstrate strong hands-on expertise in AWS services such as EMR, Glue, SageMaker, S3, Redshift, and DynamoDB, as well as AWS streaming services like Kinesis, SQS, and MSK. Troubleshooting and performance tuning experience in the Spark framework, familiarity with flow tools like Airflow, NiFi, or Luigi, and proficiency in application DevOps tools like Git, CI/CD frameworks, Jenkins, and GitLab are essential for this role.

Desired skills include experience in building stream-processing systems using solutions like Storm or Spark Streaming (an illustrative Structured Streaming sketch follows this posting), knowledge of Big Data ML toolkits such as Mahout, SparkML, or H2O, proficiency in Python, and exposure to Offshore/Onsite engagements and AWS services like Step Functions and Lambda.

Candidates with 2-4 years of hands-on experience in Cloud data engineering solutions, a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, and a passion for problem-solving and effective communication are encouraged to apply and be part of PwC's dynamic and inclusive work culture, where learning, growth, and excellence are at the core of our values. Join us at PwC, where you can make a difference today and shape the future tomorrow!
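A stream-processing pipeline of the kind mentioned above can be sketched with Spark Structured Streaming reading from a Kafka-compatible source such as MSK. This is a hedged example only: broker addresses, topic name, schema fields, and S3 paths are hypothetical, and the spark-sql-kafka package is assumed to be available on the cluster.

```python
# Minimal sketch: read click events from a Kafka/MSK topic and write micro-batches to S3.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("clickstream-stream").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("page", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "b-1.example-msk:9092")
         .option("subscribe", "clickstream")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream
          .format("parquet")
          .option("path", "s3://example-curated/clickstream/")
          .option("checkpointLocation", "s3://example-checkpoints/clickstream/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```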
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Data Platform Engineering Lead at JPMorgan Chase within Asset and Wealth Management, you are an integral part of an agile team that works to enhance, build, and deliver trusted, market-leading technology products in a secure, stable, and scalable way. As a core technical contributor, you are responsible for conducting critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Responsibilities:
- Lead the design, development, and implementation of scalable data pipelines and ETL batches using Python/PySpark on AWS.
- Execute standard software solutions, design, development, and technical troubleshooting.
- Use infrastructure as code to build applications that orchestrate and monitor data pipelines, create and manage on-demand compute resources on the cloud programmatically, and create frameworks to ingest and distribute data at scale (a hedged sketch of provisioning a transient EMR cluster follows this posting).
- Manage and mentor a team of data engineers, providing guidance and support to ensure successful product delivery and support.
- Collaborate proactively with stakeholders, users, and technology teams to understand business/technical requirements and translate them into technical solutions.
- Optimize and maintain data infrastructure on the cloud platform, ensuring scalability, reliability, and performance.
- Implement data governance and best practices to ensure data quality and compliance with organizational standards.
- Monitor and troubleshoot applications and data pipelines, identifying and resolving issues in a timely manner.
- Stay up to date with emerging technologies and industry trends to drive innovation and continuous improvement.
- Add to a team culture of diversity, equity, inclusion, and respect.

Required qualifications, capabilities, and skills:
- Formal training or certification on software engineering concepts and 5+ years of applied experience.
- Experience in software development and data engineering, with demonstrable hands-on experience in Python and PySpark.
- Proven experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Good understanding of data modeling, data architecture, ETL processes, and data warehousing concepts.
- Experience with, or good knowledge of, cloud-native ETL platforms like Snowflake and/or Databricks.
- Experience with big data technologies and services like AWS EMR, Redshift, Lambda, and S3.
- Proven experience with efficient Cloud DevOps practices and CI/CD tools like Jenkins/GitLab for data engineering platforms.
- Good knowledge of SQL and NoSQL databases, including performance tuning and optimization.
- Experience with declarative infrastructure provisioning tools like Terraform, Ansible, or CloudFormation.
- Strong analytical skills to troubleshoot issues and optimize data processes, working independently and collaboratively.
- Experience in leading and managing a team/pod of engineers, with a proven track record of successful project delivery.

Preferred qualifications, capabilities, and skills:
- Knowledge of the machine learning model lifecycle, language models, and cloud-native MLOps pipelines and frameworks is a plus.
- Familiarity with data visualization tools and data integration patterns.
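Managing on-demand compute programmatically, as described in the responsibilities above, is often done with the AWS SDK. Below is a minimal, hedged boto3 sketch that launches a transient EMR cluster to run a single Spark step and then terminate; the cluster sizing, release label, role names, job name, and S3 paths are hypothetical placeholders, and real pipelines would typically wrap this in infrastructure as code or an orchestrator.

```python
# Minimal sketch: provision a transient EMR cluster, run one Spark step, auto-terminate.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="nightly-positions-etl",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Primary", "InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # terminate once the step finishes
        "TerminationProtected": False,
    },
    Steps=[
        {
            "Name": "positions-transform",
            "ActionOnFailure": "TERMINATE_CLUSTER",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-artifacts/jobs/positions_transform.py"],
            },
        }
    ],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://example-logs/emr/",
)
print("Launched cluster:", response["JobFlowId"])
```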
Posted 1 month ago