4.0 - 8.0 years
10 - 20 Lacs
Hyderabad, Chennai
Work from Office
Roles & Responsibilities:
• We are looking for a strong Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines.
• Integrate data from multiple sources and vendors to provide holistic insights from data.
• Build and manage Data Lake and Data Warehouse solutions, design data models, create ETL processes, and implement data quality mechanisms.
• Perform exploratory data analysis (EDA) to troubleshoot data-related issues and assist in their resolution.
• Experience in client interaction, both oral and written.
• Experience mentoring juniors and providing guidance to the team.
Required Technical Skills
• Extensive experience in languages such as Python, PySpark, and SQL (basic and advanced).
• Strong experience in Data Warehousing, ETL, Data Modelling, building ETL pipelines, and Data Architecture.
• Must be proficient in Redshift, Azure Data Factory, Snowflake, etc.
• Hands-on experience with cloud services such as AWS S3, Glue, Lambda, CloudWatch, and Athena.
• Knowledge of Dataiku and Big Data technologies is good to have; basic knowledge of BI tools like Power BI or Tableau is a plus.
• Sound knowledge of data management, data operations, data quality, and data governance.
• Knowledge of SFDC and Waterfall/Agile methodology.
• Strong knowledge of the Pharma/life sciences domain and commercial data operations.
Qualifications
• Bachelor's or Master's in Engineering/MCA or an equivalent degree.
• 4-6 years of relevant industry experience as a Data Engineer.
• Experience working with Pharma syndicated data such as IQVIA, Veeva, Symphony; Claims, CRM, Sales, Open Data, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.
Location
• Preferably Hyderabad/Chennai, India
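For candidates gauging the PySpark depth implied above, here is a minimal sketch of the kind of multi-source integration pipeline the role describes; the S3 paths, table layouts, and column names are hypothetical, not the employer's actual systems.

```python
# Illustrative only: paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("vendor_integration").getOrCreate()

# Ingest two hypothetical vendor feeds (e.g., syndicated sales and CRM extracts).
sales = spark.read.parquet("s3://bucket/vendor_a/sales/")
accounts = spark.read.option("header", True).csv("s3://bucket/vendor_b/crm/")

# Basic data quality gate: drop rows missing the join key, de-duplicate.
sales_clean = sales.dropna(subset=["account_id"]).dropDuplicates(["account_id", "period"])

# Integrate the sources and derive a simple metric for downstream reporting.
joined = (
    sales_clean.join(accounts, on="account_id", how="left")
    .withColumn("net_sales", F.col("gross_sales") - F.col("returns"))
)

# Land the curated output in the warehouse/lake zone as partitioned Parquet.
joined.write.mode("overwrite").partitionBy("period").parquet("s3://bucket/curated/sales/")
```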
Posted 1 month ago
3.0 - 8.0 years
5 - 10 Lacs
Bengaluru
Work from Office
The Core AI, BI & Data Platforms Team has been established to create, operate, and run the Enterprise AI, BI, and Data platforms that reduce time to market for reporting, analytics, and data science teams to run experiments, train models, and generate insights, as well as to evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR.
At Thomson Reuters, we are recruiting a team of motivated Cloud professionals to transform how we build, manage, and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems.
About the Role
In this opportunity as the Software Engineer, you will:
Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer.
Innovate with new approaches to meet data management requirements.
Make recommendations about platform adoption, including technology integrations, application servers, libraries, and AWS frameworks, documentation, and usability by stakeholders.
Contribute to improving the customer experience.
Participate in code reviews to maintain a high-quality codebase.
Collaborate with cross-functional teams to define, design, and ship new features.
Work closely with product owners, designers, and other developers to understand requirements and deliver solutions.
Effectively communicate and liaise across the data platform and management teams.
Stay updated on emerging trends and technologies in cloud computing.
About You
You're a fit for the role of Software Engineer if you meet all or most of these criteria:
Bachelor's degree in Computer Science, Engineering, or a related field.
3+ years of relevant experience in implementing data lakes and data management technologies for large-scale organizations.
Experience in building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability.
Proficiency in the Python programming language.
Experience in AWS services and management, including serverless, container, queueing, and monitoring services such as Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS.
Good knowledge of consuming and building APIs.
Business Intelligence tools such as Power BI.
Fluency in querying languages such as SQL.
Solid understanding of software development practices such as version control via Git, CI/CD, and release management.
Agile development cadence.
Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.
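As a rough illustration of the serverless ingestion work this role touches (Lambda, S3, DynamoDB), the sketch below records an audit row for each object landed in S3; the bucket event wiring, table name, and attributes are placeholders, not Thomson Reuters systems.

```python
# Hypothetical Lambda handler sketch: the audit table name is a placeholder.
import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ingestion-audit")  # hypothetical table name

def handler(event, context):
    """Triggered by an S3 put event; records an audit row per ingested object."""
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)
        table.put_item(Item={
            "object_key": key,
            "bucket": bucket,
            "size_bytes": head["ContentLength"],
        })
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```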
Posted 1 month ago
10.0 - 16.0 years
38 - 48 Lacs
Thiruvananthapuram
Work from Office
Senior Data Engineer (10+ yrs). Skilled in Python, PySpark, AWS (Glue, EMR, DynamoDB), and RESTful APIs. Build scalable data pipelines, ensure data quality, and work in Agile teams. Location: Trivandrum/Kochi/Remote. Immediate joiners preferred. Provident fund.
Posted 1 month ago
10.0 - 15.0 years
0 - 90 Lacs
Thiruvananthapuram / Trivandrum, Kerala, India
On-site
We are looking for someone who is ready to work on US-overlapping hours (night shift, until 10 pm) and who is an expert in AWS services, at the level of a Lead or Associate Architect.
Job Overview: We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.
Key Responsibilities:
Data Ingestion Framework:
o Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
o Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
Data Quality & Validation:
o Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
o Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
API Development:
o Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
o Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
Collaboration & Agile Practices:
o Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
o Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab).
Required Qualifications:
Experience & Technical Skills:
o Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
o Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
o AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
o Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
o API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
o CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies.
Soft Skills:
o Strong problem-solving abilities and attention to detail.
o Excellent communication and interpersonal skills, with the ability to work independently and collaboratively.
o Capacity to quickly learn and adapt to new technologies and evolving business requirements.
Preferred Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Experience with additional AWS services such as Kinesis, Firehose, and SQS.
Familiarity with data lakehouse architectures and modern data quality frameworks.
Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.
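To illustrate the kind of automated data quality checks and validation routines the posting calls for, here is a minimal pandas-based sketch; the required columns and rules are assumptions for illustration, not the employer's actual standards.

```python
# Sketch of automated data quality checks; column names and rules are assumptions.
import pandas as pd

REQUIRED_COLUMNS = ["record_id", "source_system", "event_ts", "amount"]

def validate_batch(df: pd.DataFrame) -> dict:
    """Run basic validation routines and return an error report for alerting."""
    errors = {}

    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        return {"missing_columns": missing}  # structural failure: stop early

    null_counts = df[REQUIRED_COLUMNS].isna().sum()
    errors["null_counts"] = null_counts[null_counts > 0].to_dict()

    dupes = int(df.duplicated(subset=["record_id"]).sum())
    if dupes:
        errors["duplicate_record_ids"] = dupes

    bad_amounts = int((df["amount"] < 0).sum())
    if bad_amounts:
        errors["negative_amounts"] = bad_amounts

    return {k: v for k, v in errors.items() if v}

if __name__ == "__main__":
    batch = pd.DataFrame({
        "record_id": [1, 1, 2],
        "source_system": ["crm", "crm", None],
        "event_ts": pd.to_datetime(["2024-01-01"] * 3),
        "amount": [10.0, 10.0, -5.0],
    })
    # Reports the duplicate id, the null source, and the negative amount.
    print(validate_batch(batch))
```

In a production pipeline, a report like this would typically feed CloudWatch metrics or an SNS alert rather than stdout.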
Posted 1 month ago
4.0 - 6.0 years
7 - 9 Lacs
Bengaluru
Hybrid
We are looking for an experienced Software Engineer - Informatica with 4 to 6 years of hands-on expertise in designing, developing, and optimizing large-scale ETL solutions using Informatica PowerCenter. The ideal candidate will lead ETL projects, mentor junior developers, and ensure high-performance data integration across enterprise systems.
About The Role
In this role as Software Engineer, you will:
- Analyze business and functional requirements to design and implement scalable data integration solutions
- Understand and interpret High-Level Design (HLD) documents and convert them into detailed Low-Level Design (LLD)
- Develop robust, reusable, and optimized Informatica mappings, sessions, and workflows
- Apply mapping optimization and performance tuning techniques to ensure efficient ETL processes
- Conduct peer code reviews and suggest improvements for reliability and performance
- Prepare and execute comprehensive unit test cases and support system/integration testing
- Maintain detailed technical documentation, including LLDs, data flow diagrams, and test cases
- Build data pipelines and transformation logic in Snowflake, ensuring performance and scalability
- Develop and manage Unix shell scripts for automation, scheduling, and monitoring of ETL jobs
- Collaborate with cross-functional teams to support UAT, deployments, and production issues
About You
You are a fit for this position if your background includes:
- 4-6 years of strong hands-on experience with Informatica PowerCenter
- Proficiency in developing and optimizing ETL mappings, workflows, and sessions
- Solid experience with performance tuning techniques and best practices in ETL processes
- Hands-on experience with Snowflake for data loading, SQL transformations, and optimization
- Strong skills in Unix/Linux scripting for job automation
- Experience in converting HLDs into LLDs and defining unit test cases
- Knowledge of data warehousing concepts, data modelling, and data quality frameworks
Good to Have
- Knowledge of the Salesforce data model and integration (via Informatica or API-based solutions)
- Exposure to AWS cloud services like S3, Glue, Redshift, Lambda, etc.
- Familiarity with relational databases such as SQL Server and PostgreSQL
- Experience with job schedulers like Control-M, ESP, or equivalent
- Agile methodology experience and tools such as JIRA, Confluence, and Git
- Knowledge of DBT (Data Build Tool) for data transformation and orchestration
- Experience with Python scripting for data manipulation, automation, or integration tasks
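Informatica mappings themselves are built in a GUI, but the Snowflake loading and automation side of this role can be sketched in code. Below is a minimal example using the Snowflake Python connector to run a COPY INTO from a stage; the account, credentials, stage, and table names are all placeholders.

```python
# Sketch only: account, credentials, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",         # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load staged files into a target table; @etl_stage is a hypothetical named stage.
    cur.execute("""
        COPY INTO STAGING.ORDERS
        FROM @etl_stage/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results, useful for job logging
finally:
    conn.close()
```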
Posted 1 month ago
5.0 - 12.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Key Skills: Python, PySpark, AWS Glue, Redshift, and Spark Streaming.
Job Description:
6+ years of experience in data engineering, specifically in cloud environments like AWS.
Proficiency in PySpark for distributed data processing and transformation.
Solid experience with AWS Glue for ETL jobs and managing data workflows.
Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.
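For reference, a bare-bones AWS Glue PySpark job of the kind this posting targets might look like the following sketch; the catalog database, table, and S3 paths are hypothetical.

```python
# Skeleton of a Glue ETL job; catalog names and paths are assumptions.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="events"
)

# Transform with plain Spark semantics, then write curated Parquet.
df = dyf.toDF().filter("event_type IS NOT NULL")
df.write.mode("append").parquet("s3://bucket/curated/events/")  # hypothetical path

job.commit()
```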
Posted 1 month ago
10.0 - 20.0 years
0 - 40 Lacs
Thiruvananthapuram / Trivandrum, Kerala, India
On-site
We are looking for someone who is ready to work on US-overlapping hours (night shift, until 10 pm) and who is an expert in AWS services, at the level of a Lead or Associate Architect.
Job Overview: We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.
Key Responsibilities
Data Ingestion Framework:
o Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
o Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
Data Quality & Validation:
o Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
o Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
API Development:
o Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
o Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
Collaboration & Agile Practices:
o Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
o Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab).
Required Qualifications
Experience & Technical Skills:
o Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
o Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
o AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
o Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
o API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
o CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies.
Soft Skills:
o Strong problem-solving abilities and attention to detail.
o Excellent communication and interpersonal skills, with the ability to work independently and collaboratively.
o Capacity to quickly learn and adapt to new technologies and evolving business requirements.
Preferred Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
Experience with additional AWS services such as Kinesis, Firehose, and SQS.
Familiarity with data lakehouse architectures and modern data quality frameworks.
Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.
Posted 1 month ago
5.0 - 8.0 years
1 - 2 Lacs
Kochi
Remote
Job Title: AWS Cloud Engineer
Location: Kochi, India (Remote Option Available)
We are looking for a highly skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience in AWS cloud technologies. The ideal candidate will have strong expertise in AWS services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with solid development experience in Python and Docker. This role involves troubleshooting issues, reviewing solution designs, and coding high-quality implementations.
Key Responsibilities:
Work extensively with AWS services including S3, EC2, MSK, Glue, DMS, and SageMaker.
Develop, containerize, and deploy applications using Python and Docker.
Design and review system architecture and cloud-based solutions.
Troubleshoot and resolve issues in AWS infrastructure and application layers.
Collaborate with development and DevOps teams to build scalable and secure applications.
Preferred Candidate Profile
Requirements:
Minimum 5 years of hands-on experience in AWS Cloud.
Proficiency in Python and containerization using Docker.
Strong understanding of AWS data and streaming services.
Experience with AWS Glue, DMS, and SageMaker.
Ability to troubleshoot issues, analyze root causes, and implement effective solutions.
Strong communication and problem-solving skills.
Preferred Qualifications: AWS Certification (Associate or Professional level) is a plus.
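A small operational sketch in the spirit of the troubleshooting duties above: it inspects a DMS replication task with boto3 and resumes it from its checkpoint if it has stopped or failed. The task ARN is a placeholder; region and credentials are assumed to come from the environment.

```python
# Ops sketch: the replication task ARN is a placeholder.
import boto3

dms = boto3.client("dms")

def check_and_resume(task_arn: str) -> None:
    """Inspect a DMS replication task and resume it if stopped or failed."""
    resp = dms.describe_replication_tasks(
        Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
    )
    task = resp["ReplicationTasks"][0]
    status = task["Status"]
    print(f"{task['ReplicationTaskIdentifier']}: {status}")

    if status in ("stopped", "failed"):
        # 'resume-processing' continues from the last checkpoint instead of reloading.
        dms.start_replication_task(
            ReplicationTaskArn=task_arn,
            StartReplicationTaskType="resume-processing",
        )
        print("resume requested")

if __name__ == "__main__":
    check_and_resume("arn:aws:dms:region:account:task:EXAMPLE")  # placeholder ARN
```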
Posted 1 month ago
6.0 - 9.0 years
30 - 32 Lacs
Hyderabad, Coimbatore, Bengaluru
Work from Office
Job Summary: We are seeking a skilled Cloud Migration Consultant with hands-on experience in assessing and migrating complex applications from AWS to Azure. The ideal candidate will work closely with Microsoft business units, participating in application intake, assessment, and migration planning. This role includes creating migration artifacts, leading client interactions, and supporting application modernization initiatives on Azure with occasional AWS exposure.
Key Responsibilities:
Assess application readiness by documenting architecture, dependencies, and migration strategies.
Conduct interviews with stakeholders and generate discovery insights using tools like Azure Migrate, CloudockIt, and PowerShell.
Develop architecture diagrams and migration playbooks, and manage Azure DevOps boards.
Set up and configure applications in on-premises and cloud environments, primarily Azure.
Support proof-of-concepts (PoCs) and provide expert advice on migration and modernization options.
Collaborate with application, database, and infrastructure teams to ensure a smooth transition to migration factory teams.
Track project progress, identify blockers and risks, and report timely status updates to leadership.
Required Skills and Qualifications:
Minimum 4 years of experience in cloud migration and application assessment.
Strong expertise in Azure IaaS and PaaS services (e.g., VMs, App Services, Azure Data Factory).
Familiarity with AWS IaaS and PaaS components (e.g., EC2, RDS, Glue, S3).
Proficient in programming languages and frameworks including Java (Spring Boot), C#, .NET, Python, Angular, React.js, and REST APIs.
Working knowledge of Kafka, Docker, Kubernetes, and Azure DevOps.
Solid understanding of network infrastructure including VNets, NSGs, Firewalls, and WAFs.
Experience with IAM concepts and technologies such as OAuth, SAML, Okta, and SiteMinder.
Exposure to Big Data technologies like Databricks, Hadoop, Oracle, and DocumentDB.
Preferred Qualifications:
Azure or AWS cloud certifications.
Prior experience with enterprise-scale cloud migration projects, especially within the Microsoft ecosystem.
Excellent communication skills and proven ability to manage stakeholder relationships effectively.
Location: Hyderabad/Bangalore/Coimbatore/Pune
Posted 1 month ago
3.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics.
About the role:
Work with low to minimum supervision to solve business problems using data and analytics.
Work in multiple business domain areas, including Customer Experience and Service, Operations, Finance, and Sales and Marketing.
Work with various business stakeholders to understand and document requirements.
Design an analytical framework to provide insights into a business problem.
Explore and visualize multiple data sets to understand the data available for problem solving.
Build end-to-end data pipelines to handle and process data at scale.
Build machine learning models and/or statistical solutions.
Build predictive models.
Use Natural Language Processing to extract insight from text.
Design database models (if a data mart or operational data store is required to aggregate data for modeling).
Design visualizations and build dashboards in Tableau and/or Power BI.
Extract business insights from the data and models.
Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards.
Work collaboratively with other team members.
About you:
Overall 3+ years' experience in technology roles.
A minimum of 1 year of experience working in the data science domain.
Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, and NLTK.
Highly proficient in Python.
Highly proficient in SQL.
Experience with Tableau and/or Power BI.
Has worked with Amazon Web Services and SageMaker.
Ability to build data pipelines for data movement using tools such as Alteryx, Glue, or Informatica.
Proficient in machine learning, statistical modelling, and data science techniques.
Experience with one or more of the following types of business analytics applications: predictive analytics for customer retention, cross-sales, and new customer acquisition; pricing optimization models; segmentation; recommendation engines.
Experience in one or more of the following business domains: Customer Experience and Service; Finance; Operations.
Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations.
Excellent organizational, analytical, and problem-solving skills.
Ability to communicate complex results in a simple and concise manner at all levels within the organization.
Ability to excel in a fast-paced, startup-like environment.
(An illustrative modelling sketch follows at the end of this posting.)
What's in it For You
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news.
We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.
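Returning to the technical requirements above (Python, Scikit-learn, predictive analytics for customer retention), here is a minimal illustrative sketch of a retention classifier trained on synthetic data; the features and labels are fabricated purely for illustration.

```python
# Minimal sketch of a customer-retention classifier; data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))  # e.g., tenure, usage, support tickets, spend
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```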
Posted 1 month ago
3.0 - 8.0 years
5 - 15 Lacs
Hyderabad
Work from Office
Dear Candidates,
We are conducting a Walk-In Interview in Hyderabad for the position of Data Engineering on 20th/21st/22nd June 2025.
Position: Data Engineering
Job description:
Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift).
Handling of data-related activities such as data parsing, cleansing, quality definition, data pipelines, storage, and ETL scripts.
Experience in programming languages: Python/PySpark/SQL.
Hands-on experience with data migration.
Experience consuming REST APIs using various authentication options within AWS.
Lambda architecture: orchestrate triggers, debug, and schedule batch jobs using AWS Glue, Lambda, and Step Functions.
Understanding of AWS security features such as IAM roles and policies.
Exposure to DevOps tools.
AWS certification is highly preferred.
Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift.
Date: 20th June 2025 to 22nd June 2025
Time: 9.00 AM to 6.00 PM
Eligibility: Any Graduate
Experience: 2-10 Years
Gender: ANY
Interested candidates can walk in directly. For any queries, please contact us at +91 7349369478 / 8555079906.
Interview Venue Details:
Selectify Analytics
Address: Capital Park (Jain Sadguru Capital Park), Ayyappa Society, Silicon Valley, Madhapur, Hyderabad, Telangana 500081
Contact Person: Mr. Deepak/Saqeeb/Ravi Kumar
Interview Time: 9.00 AM to 6.00 PM
Contact Number: +91 7349369478 / 8555079906
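As a quick illustration of the Lambda-plus-Glue orchestration named in the mandatory skills, the sketch below starts a Glue job run from a Lambda handler; the job name and argument key are placeholders.

```python
# Sketch of a Lambda that orchestrates a Glue batch job; names are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    """Start a Glue job run with arguments derived from the triggering event."""
    run = glue.start_job_run(
        JobName="nightly-etl",  # placeholder job name
        Arguments={"--source_date": event.get("date", "latest")},
    )
    # The run id can be handed to a Step Functions wait state or polled via get_job_run.
    return {"JobRunId": run["JobRunId"]}
```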
Posted 1 month ago
7.0 - 10.0 years
8 - 16 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Mandatory Skills: AWS, Python.
Detailed job description - Skill Set:
Hands-on experience with programming languages such as Python (mandatory).
Thorough understanding of AWS from a data engineering and tools standpoint; experience in another cloud is also beneficial.
Experience in AWS Glue, Spark, and Python with Airflow for designing and developing data pipelines.
Expertise in Informatica Cloud is advantageous.
Data Modeling: advanced/intermediate data modeling skills (Master/Ref/ODS/DW/DM) to enable analytics on the platform.
Traditional data warehousing and ETL skillset, including strong SQL and PL/SQL skills.
Experience with inbound and outbound integrations on the cloud platform.
Design and development of Data APIs (Python, Flask/FastAPI) to expose data on the platform.
Partner with SAs to identify data inputs and related data sources, review sample data, identify gaps, and perform quality checks.
Experience loading and querying cloud-hosted databases like Redshift, Snowflake, and BigQuery.
Preferred - Knowledge of system-to-system integration, messaging/queuing, and managed file transfer.
Preferred - Building and maintaining REST APIs, ensuring security and scalability.
Preferred - DevOps/DataOps: experience with Infrastructure as Code and setting up CI/CD pipelines.
Preferred - Building real-time streaming data ingestion.
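To make the "Data APIs (Python, Flask/FastAPI)" requirement concrete, here is a minimal FastAPI sketch exposing a metric endpoint; the route, response model, and in-memory store are illustrative stand-ins for a real warehouse query (e.g., against Redshift or Snowflake).

```python
# Sketch of a data API; endpoint and schema are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Platform Data API")

class Metric(BaseModel):
    name: str
    value: float

# Stand-in for a warehouse query behind the endpoint.
FAKE_STORE = {"daily_active_users": 1234.0}

@app.get("/metrics/{name}", response_model=Metric)
def get_metric(name: str) -> Metric:
    if name not in FAKE_STORE:
        raise HTTPException(status_code=404, detail="unknown metric")
    return Metric(name=name, value=FAKE_STORE[name])

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```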
Posted 1 month ago
3.0 - 8.0 years
6 - 18 Lacs
Hyderabad
Work from Office
Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift, SQL. Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift).
Posted 1 month ago
2.0 - 4.0 years
7 - 9 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore/Mumbai/Kolkata/Gurugram/Hyderabad/Pune/Chennai
EXPERIENCE: 2+ Years
JOB TITLE: Senior Data Engineer / Data Engineer
OVERVIEW OF THE ROLE:
As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.
Mandatory Skills:
Hands-on software coding or scripting for a minimum of 3 years
Experience in product management for at least 2 years
Stakeholder management experience for at least 3 years
Experience in one of the GCP, AWS, or Azure cloud platforms
Key Responsibilities:
Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
Implement efficient solutions for high-volume, batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code).
Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.
Skills & Experience:
Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
Strong SQL development skills for ETL, analytics, and performance optimization.
Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.
Professional Attributes:
Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
Ability to communicate technical designs and issues effectively with team members and stakeholders.
Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.
Desirable Experience:
Contributions to open-source data engineering/tools communities.
Implementing data cataloging, stewardship, and data democratization initiatives.
Hands-on work with DataOps/DevOps pipelines for code and data.
Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.
EDUCATIONAL QUALIFICATIONS:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
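A minimal Airflow DAG of the sort this role builds and monitors might look like the sketch below; the task bodies, schedule, and DAG id are placeholders, and Airflow 2.x is assumed.

```python
# Sketch of a daily batch DAG; task logic and schedule are placeholders (Airflow 2.x).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull from source system")  # placeholder for real ingestion logic

def transform(**context):
    print("clean and model the batch")  # placeholder for Spark/SQL transformation

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```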
Posted 1 month ago
6.0 - 10.0 years
6 - 10 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Roles and Responsibilities:
Proficient in Python scripting and PySpark for data processing tasks.
Strong SQL capabilities, with hands-on experience managing big data using ETL tools like Informatica.
Experience with the AWS cloud platform and its data services, including S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, and EventBridge.
Skilled in Bash shell scripting.
Understanding of data lakehouse architecture, particularly with the Iceberg format, is a plus.
Preferred:
Experience with Kafka and MuleSoft API.
Understanding of healthcare data systems is a plus.
Experience in Agile methodologies.
Strong analytical and problem-solving skills.
Effective communication and teamwork abilities.
Responsibilities:
Develop and maintain data pipelines and ETL processes to manage large-scale datasets.
Collaborate to design and test data architectures that align with business needs.
Implement and optimize data models for efficient querying and reporting.
Assist in the development and maintenance of data quality checks and monitoring processes.
Support the creation of data solutions that enable analytical capabilities.
Posted 1 month ago
5.0 - 10.0 years
18 - 24 Lacs
Bangalore Rural
Work from Office
Responsibilities:
Design, develop, and maintain big data solutions using Spark, Scala, and Apache tools.
Optimize performance through data modeling and query optimization techniques.
Benefits: Annual bonus; Provident fund.
Posted 1 month ago
5.0 - 8.0 years
9 - 13 Lacs
Pune
Hybrid
Role & Responsibilities
Description: As a Senior Data Engineer, you manage and develop solutions in close alignment with various business and Spoke stakeholders. You are responsible for the implementation of the IT governance guidelines, and collaborate with the Spoke's Data Scientists, Data Analysts, and Business Analysts when relevant.
Tasks
Create and manage data pipeline architecture for data ingestion, pipeline setup, and data curation.
Experience working with and creating cloud data solutions.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Implement the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using PySpark, SQL, and AWS big data technologies.
Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
Manipulate data at scale: getting data into a ready-to-use state in close alignment with various business and Spoke stakeholders.
Preferred candidate profile
Hard skills (advanced knowledge):
ETL; Data Lake, Data Warehouse, and RDS architecture knowledge.
Python, SQL (any other OOP language is also valuable).
PySpark (preferably) or Spark knowledge.
Object-oriented programming, clean code, and good documentation skills.
AWS: S3, Athena, Lambda, Glue, IAM, SQS, EC2, QuickSight, etc.
Git.
Data analysis & visualization.
Optional: AWS CDK (Cloud Development Kit), CI/CD knowledge.
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
Experienced in developing efficient software code leveraging the Spark Framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers like Kafka.
Preferred technical and professional experience: Certification in AWS, and Databricks- or Cloudera Spark-certified developers.
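Illustrating the streaming-pipeline experience this posting asks for, here is a PySpark Structured Streaming sketch that reads from Kafka and lands Parquet; the broker addresses, topic, and paths are placeholders, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Streaming sketch: brokers, topic, and sink paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Requires the spark-sql-kafka connector package at submit time.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
    .option("subscribe", "events")                      # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast before downstream processing.
events = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://bucket/stream/events/")         # placeholder sink
    .option("checkpointLocation", "s3://bucket/chk/events/")
    .start()
)
query.awaitTermination()
```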
Posted 1 month ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
As a senior SAP Consultant, you will serve as a client-facing practitioner working collaboratively with clients to deliver high-quality solutions, and be a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology (or equivalent) and associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.
Your primary responsibilities include:
Strategic SAP Solution Focus: Working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs.
Comprehensive Solution Delivery: Involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills.
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on Cloud Data Platforms on AWS.
Exposure to streaming solutions and message brokers like Kafka.
Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
Good to excellent SQL skills.
Preferred technical and professional experience: Certification in AWS, and Databricks- or Cloudera Spark-certified developers.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Must have:
5+ years of experience in designing, developing, and deploying AI/ML solutions, with at least 3+ years focused on AWS AI/ML services.
Deep hands-on experience with Amazon SageMaker for building, training, tuning, and deploying ML models.
Proven ability to work with AWS data services, such as: Amazon S3 (data storage), AWS Glue or AWS Data Wrangler (data processing), Amazon Athena or Redshift (querying/analytics).
Familiarity with AWS AI services, like: Amazon Rekognition (computer vision), Amazon Comprehend (NLP), Amazon Transcribe/Polly (speech), Amazon Lex (chatbots).
Experience building end-to-end ML pipelines using AWS-native tools or integrating with tools like Step Functions, Lambda, and CloudWatch for automation and monitoring.
Solid understanding of model versioning, deployment strategies (real-time, batch, A/B testing), and model monitoring on AWS.
Proficiency in Python for ML model development and deployment.
Good to have:
Hands-on experience with MLOps practices using AWS tools (e.g., SageMaker Pipelines, Model Registry, CodePipeline, CloudFormation).
Familiarity with data lake architecture and tools like AWS Lake Formation.
AWS certifications (e.g., AWS Certified Machine Learning – Specialty, Solutions Architect – Associate/Professional).
Experience with application performance tuning.
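As a hedged sketch of the SageMaker build-train-deploy loop central to this role, the snippet below uses the SageMaker Python SDK's scikit-learn estimator; the IAM role ARN, S3 training channel, framework version, and train.py script are all placeholders and assumptions, not a definitive setup.

```python
# Sketch of SageMaker training and real-time deployment; names are placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder role

estimator = SKLearn(
    entry_point="train.py",      # hypothetical training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",   # assumed available sklearn container version
    sagemaker_session=session,
)

# Train on data staged in S3, then deploy a real-time endpoint.
estimator.fit({"train": "s3://bucket/train/"})  # placeholder channel
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# predictor.predict(...) serves real-time inference; call delete_endpoint() when done.
```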
Posted 1 month ago
7.0 - 12.0 years
40 - 45 Lacs
Bengaluru
Hybrid
Role & Responsibilities
Data engineer with architect-level experience in ETL, AWS (Glue), PySpark, Python, etc.
Preferred candidate profile
Immediate joiners who can work on a contract basis. If you are interested, please share your updated CV at pavan.teja@careernet.in
Posted 1 month ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.
As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.
Key Responsibilities:
Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena.
Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming.
Optimize data pipelines for performance, scalability, and cost-efficiency.
Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation).
Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform.
Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation.
Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions.
Monitor, troubleshoot, and resolve issues in production pipelines.
Stay abreast of AWS advancements and recommend improvements where applicable.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Over 8 years of experience in data engineering.
More than 3 years of experience with the AWS data ecosystem.
Strong experience with PySpark, SQL, and Python.
Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions.
Familiarity with data modelling concepts, dimensional models, and data lake architectures.
Experience with CI/CD, GitHub Actions, CloudFormation/Terraform.
Understanding of data governance, privacy, and security best practices.
Strong problem-solving and communication skills.
Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization.
Experience with AWS Lake Formation and Data Catalog for metadata management.
Knowledge of Databricks, Snowflake, or BigQuery for data analytics.
AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus.
Strong problem-solving and analytical thinking.
Excellent communication and collaboration abilities.
Ability to work independently and in agile teams.
A proactive approach to identifying and addressing challenges in data workflows.
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 1 month ago
12.0 - 15.0 years
35 - 60 Lacs
Chennai
Work from Office
AWS Solution Architect:
Experience in driving the Enterprise Architecture for large commercial customers.
Experience in healthcare enterprise transformation.
Prior experience in architecting cloud-first applications.
Experience leading a customer through a migration journey and proposing competing views to drive a mutual solution.
Knowledge of cloud architecture concepts.
Knowledge of application deployment and data migration.
Ability to design high-availability applications on AWS across availability zones and regions.
Ability to design applications on AWS taking advantage of disaster recovery design guidelines.
Design, implement, and maintain streaming solutions using AWS Managed Streaming for Apache Kafka (MSK).
Monitor and manage Kafka clusters to ensure optimal performance, scalability, and uptime.
Configure and fine-tune MSK clusters, including partitioning strategies, replication, and retention policies.
Analyze and optimize the performance of Kafka clusters and streaming pipelines to meet high-throughput and low-latency requirements.
Design and implement data integration solutions to stream data between various sources and targets using MSK.
Lead data transformation and enrichment processes to ensure data quality and consistency in streaming applications.
Mandatory Technical Skillset:
AWS architectural concepts – designs, implements, and manages cloud infrastructure.
AWS services (EC2, S3, VPC, Lambda, ELB, Route 53, Glue, RDS, DynamoDB, Postgres, Aurora, API Gateway, CloudFormation, etc.).
Kafka / Amazon MSK.
Domain Experience:
Healthcare domain experience is required; Blues experience is preferred.
Location – Pan India
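To ground the MSK responsibilities above, here is a minimal kafka-python producer sketch with TLS in transit; the broker endpoints and topic are placeholders, and clusters using IAM auth would need a SASL configuration instead.

```python
# Producer sketch for an MSK cluster; brokers and topic are placeholders.
import json
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers=["b-1.msk.example:9094", "b-2.msk.example:9094"],  # placeholder brokers
    security_protocol="SSL",  # MSK in-transit encryption (port 9094)
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Keyed messages keep all records for one member on one partition (ordering guarantee).
producer.send("claims-events", key=b"member-123", value={"claim_id": 1, "status": "ADJUDICATED"})
producer.flush()
```

Keying by member id is a common partitioning strategy for healthcare claim streams, since it preserves per-member event ordering while still spreading load across partitions.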
Posted 1 month ago