6.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.

About the Role: In this role as a Senior Data Engineer, you will:
• Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
• Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
• Build pipelines that source, transform, and load both structured and unstructured data, keeping data security and access controls in mind.
• Explore large volumes of data with curiosity and conviction.
• Contribute to the strategy and architecture of data management systems and solutions.
• Proactively troubleshoot and resolve data-related issues and performance bottlenecks in a timely manner.
• Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
• Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.

Shift Timings: 12 PM to 9 PM (IST). Work from office for 2 days a week (mandatory).

About You: You're a fit for the role of Senior Data Engineer if your background includes:
• At least 6-7 years of total work experience, with at least 3 years in data engineering or analytics domains.
• A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
• SQL proficiency (a must).
• Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
• Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
• Experience with orchestration tools like Airflow or Dagster.
• Preferred: experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
• Data modelling knowledge of schemas such as star and snowflake.
• Experience building data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
• Knowledge of building ETL workflows, database design, and query optimization.
• Experience with a scripting language like Python.
• Works well within a team and collaborates with colleagues across domains and geographies.
• Excellent oral, written, and visual communication skills.
• A demonstrable ability to assimilate new information thoroughly and quickly.
• A strong logical and scientific approach to problem-solving.
• The ability to articulate complex results in a simple and concise manner to all levels within the organization.

#LI-GS2

What's in it For You:
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset.
This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry-Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news.
We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here.
More information about Thomson Reuters can be found on thomsonreuters.com.
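For context on the pipeline and orchestration work this posting describes, here is a minimal sketch of an Airflow DAG in the 2.x TaskFlow style, covering one extract-transform-load pass. All task, table, and pipeline names are hypothetical illustrations, not part of the posting.

```python
# Minimal Airflow 2.x TaskFlow DAG sketch: extract -> transform -> load.
# All names (pipeline, fields) are hypothetical illustrations.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def sales_elt_pipeline():
    @task
    def extract() -> list[dict]:
        # In practice this would call a source API or read from S3.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        df = pd.DataFrame(rows)
        df["amount_usd"] = df["amount"].round(2)  # example transformation
        return df.to_dict("records")

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to Snowflake/Redshift here.
        print(f"loading {len(rows)} rows")

    load(transform(extract()))


sales_elt_pipeline()
```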
Posted 2 weeks ago
5.0 - 10.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Senior Machine Learning Engineer - Recommender Systems

Join our team at Thomson Reuters and contribute to the global knowledge economy. Our innovative technology influences global markets and supports professionals worldwide in making pivotal decisions. Collaborate with some of the brightest minds on diverse projects to craft next-generation solutions that have a significant impact. As a leader in providing intelligent information, we value the unique perspectives that foster the advancement of our business and your professional journey.

Are you excited about the opportunity to leverage your extensive technical expertise to guide a development team through the complexities of full life cycle implementation at a top-tier company? Our Commercial Engineering team is eager to welcome a skilled Senior Machine Learning Engineer to our established global engineering group. We're looking for someone enthusiastic, an independent thinker who excels in a collaborative environment across various disciplines and is at ease interacting with a diverse range of individuals and technology stacks. This is your chance to make a lasting impact by transforming customer interactions as we develop the next generation of an enterprise-wide experience.

About the Role: As a Senior Machine Learning Engineer, you will:
• Spearhead the development and technical implementation of machine learning solutions, including configuration and integration, to fulfill business, product, and recommender system objectives.
• Create machine learning solutions that are scalable, dependable, and secure.
• Craft and sustain technical outputs such as design documentation and representative models.
• Contribute to the establishment of machine learning best practices, technical standards, model designs, and quality control, including code reviews.
• Provide expert oversight, guidance on implementation, and solutions for technical challenges.
• Collaborate with an array of stakeholders, cross-functional and product teams, business units, technical specialists, and architects to grasp the project scope, requirements, solutions, data, and services.
• Promote a team-focused culture that values information sharing and diverse viewpoints.
• Cultivate an environment of continual enhancement, learning, innovation, and deployment.

About You: You are an excellent candidate for the role of Senior Machine Learning Engineer if you possess:
• At least 5 years of experience in addressing practical machine learning challenges, particularly with recommender systems, to enhance user efficiency, reliability, and consistency.
• A profound comprehension of data processing, machine learning infrastructure, and DevOps/MLOps practices.
• A minimum of 2 years of experience with cloud technologies (AWS is preferred), including services, networking, and security principles.
• Direct experience in machine learning and orchestration, developing intricate multi-tenant machine learning products.
• Proficient Python programming skills, SQL, and data modeling expertise, with dbt considered a plus.
• Familiarity with Spark, Airflow, PyTorch, scikit-learn, Pandas, Keras, and other relevant ML libraries.
• Experience in leading and supporting engineering teams.
• A robust background in crafting data science and machine learning solutions.
• A creative, resourceful, and effective problem-solving approach.
#LI-FZ1

What's in it For You:
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry-Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals.
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
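To ground the recommender-systems focus of this posting, here is a minimal matrix-factorization sketch in PyTorch. The interactions and dimensions are toy values; a production system would add negative sampling, evaluation, and serving concerns.

```python
# Minimal matrix-factorization recommender sketch (PyTorch).
# Toy data and hyperparameters; illustration only.
import torch
import torch.nn as nn

n_users, n_items, dim = 100, 50, 16

class MatrixFactorization(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users, items):
        # Predicted rating = dot product of user and item factors.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=1)

model = MatrixFactorization()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Toy explicit interactions: (user, item, rating).
users = torch.tensor([0, 1, 2])
items = torch.tensor([3, 4, 5])
ratings = torch.tensor([4.0, 2.0, 5.0])

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(users, items), ratings)
    loss.backward()
    opt.step()
```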
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Engineer
Location: Hyderabad
Experience: 5+ Years

Job Summary: We are looking for a skilled and experienced Data Engineer with over 5 years of experience in data engineering and data migration projects. The ideal candidate should possess strong expertise in SQL, Python, data modeling, data warehousing, and ETL pipeline development. Experience with big data tools like Hadoop and Spark, along with AWS services such as Redshift, S3, Glue, EMR, and Lambda, is essential. This role provides an excellent opportunity to work on large-scale data solutions, enabling data-driven decision-making and operational excellence.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines and ETL processes.
• Develop and optimize data models and data warehouse architectures.
• Implement and manage big data technologies and cloud-based data solutions.
• Perform data migration, data transformation, and integration from multiple sources.
• Collaborate with data scientists, analysts, and business teams to understand data needs and deliver solutions.
• Ensure data quality, consistency, and security across all data pipelines and storage systems.
• Optimize performance and manage cost-efficient AWS cloud resources.

Basic Qualifications:
• Master's degree in Computer Science, Engineering, Analytics, Mathematics, Statistics, IT, or equivalent.
• 5+ years of experience in data engineering and data migration projects.
• Proficient in SQL and Python for data processing and analysis.
• Strong experience in data modeling, data warehousing, and building data pipelines.
• Hands-on experience with big data technologies like Hadoop and Spark.
• Expertise in AWS services including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM.
• Understanding of ETL development best practices and principles.

Preferred Qualifications:
• Knowledge of data security and data privacy best practices.
• Experience with DevOps and CI/CD practices related to data workflows.
• Familiarity with data lake architectures and real-time data streaming.
• Strong problem-solving abilities and attention to detail.
• Excellent verbal and written communication skills.
• Ability to work independently and in a team-oriented environment.

Good to Have:
• Experience with orchestration tools like Airflow or Step Functions.
• Exposure to BI/visualization tools like QuickSight, Tableau, or Power BI.
• Understanding of data governance and compliance standards.

Why Join Us? People Tech Group has grown significantly over the past two decades, focusing on enterprise applications and IT services. We are headquartered in Bellevue, Washington, with a presence across the USA, Canada, and India, and are expanding to the EU, ME, and APAC regions. With a strong pipeline of projects and satisfied customers, People Tech has been recognized as a Gold Certified Partner for Microsoft and Oracle.

Benefits:
• L1 visa opportunities to the USA after 1 year of a proven track record.
• Competitive wages with private healthcare cover.
• Incentives for certifications and educational assistance for relevant courses.
• Support for family with maternity leave.
• Complimentary daily lunch and participation in employee resource groups.

For more details, please visit People Tech Group.
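As a hedged sketch of the Glue-based ETL work listed above, here is the standard PySpark job skeleton that AWS Glue scripts follow, with hypothetical catalog, table, and bucket names.

```python
# Skeleton of an AWS Glue PySpark job (catalog/bucket names hypothetical).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter, and write Parquet to S3.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)
recent = orders.filter(lambda row: row["order_year"] >= 2024)
glue_context.write_dynamic_frame.from_options(
    frame=recent,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
job.commit()
```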
Posted 2 weeks ago
6.0 - 10.0 years
1 - 1 Lacs
Chennai
Hybrid
Overview: TekWissen is a global workforce management provider with operations in India and many other countries. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid

Position Description: At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Skills Required: BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Google Cloud Platform.

Experience Required:
• GCP Data Engineer certified.
• Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions.
• 5+ years of complex SQL development experience.
• 2+ years of experience with programming languages such as Python, Java, or Apache Beam.
• Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications in production-scale solutions.
Additional Skills: Terraform, Tekton, Postgres, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes.

Experience Preferred:
• In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real-time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage.
• DevOps tools such as Tekton, GitHub, Terraform, and Docker.
• Expert in designing, optimizing, and troubleshooting complex data pipelines.
• Experience developing with microservice architecture on a container orchestration framework.
• Experience in designing pipelines and architectures for data processing.
• Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
• Self-directed; works independently with minimal supervision and adapts to ambiguous environments.
• Evidence of a proactive problem-solving mindset and willingness to take the initiative.
• Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas to cross-functional teams and all levels of management.
• Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
• Data engineering or development experience gained in a regulated financial environment.
• Experience coaching and mentoring data engineers.
• Project management tools like Atlassian Jira.
• Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
• Experience with data security, governance, and compliance best practices in the cloud.
• Experience with AI solutions or platforms that support AI solutions.
• Experience using data science concepts on production datasets to generate insights.

Experience Range: 5+ years
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
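As a small illustration of the BigQuery work central to this stack, here is a sketch using the google-cloud-bigquery client with a parameterized query; the project, dataset, and table names are hypothetical.

```python
# Minimal BigQuery query sketch (project/dataset/table names hypothetical).
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT customer_id, SUM(amount) AS total_receivables
    FROM `example_project.finance.receivables`
    WHERE snapshot_date = @as_of
    GROUP BY customer_id
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("as_of", "DATE", "2024-01-31")]
)

# result() blocks until the query job finishes, then streams rows.
for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_receivables)
```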
Posted 2 weeks ago
8.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Location: Delhi
Experience: 5–8 Years
Industry: Financial Services / Payments

Job Summary: We are looking for a skilled Data Modeler / Architect with 5–8 years of experience in designing, implementing, and optimizing robust data architectures in the financial payments industry. The ideal candidate will have deep expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms such as Databricks or Snowflake. You will play a key role in designing scalable data models, orchestrating reliable data workflows, and ensuring the integrity and performance of mission-critical financial datasets. This is a highly collaborative role interfacing with engineering, analytics, product, and compliance teams.

Key Responsibilities:
• Design, implement, and maintain logical and physical data models to support transactional, analytical, and reporting systems.
• Develop and manage scalable ETL/ELT pipelines for processing large volumes of financial transaction data.
• Tune and optimize SQL queries, stored procedures, and data transformations for maximum performance.
• Build and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi.
• Architect data lakes and warehouses using platforms like Databricks, Snowflake, BigQuery, or Redshift.
• Enforce and uphold data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
• Collaborate closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions.
• Conduct data profiling, validation, and quality assurance to ensure clean and consistent data.
• Maintain clear and comprehensive documentation for data models, pipelines, and architecture.

Required Skills & Qualifications:
• 5–8 years of experience as a Data Modeler, Data Architect, or Senior Data Engineer in the financial/payments domain.
• Advanced SQL expertise, including query tuning, indexing, and performance optimization.
• Proficiency in developing ETL/ELT workflows using tools such as Spark, dbt, Talend, or Informatica.
• Experience with data orchestration frameworks: Airflow, Dagster, Luigi, etc.
• Strong hands-on experience with cloud-based data platforms like Databricks, Snowflake, or equivalents.
• Deep understanding of data warehousing principles: star/snowflake schemas, slowly changing dimensions, etc.
• Familiarity with financial data structures, such as payment transactions, reconciliation, fraud patterns, and audit trails.
• Working knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
• Strong analytical thinking and problem-solving capabilities in high-scale environments.

Preferred Qualifications:
• Experience with real-time data pipelines (e.g., Kafka, Spark Streaming).
• Exposure to data mesh or data fabric architecture paradigms.
• Certifications in Snowflake, Databricks, or relevant cloud platforms.
• Knowledge of Python or Scala for data engineering tasks.
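Since the posting leans on dimensional modeling and slowly changing dimensions, here is a hedged sketch of a Type 2 SCD load as it might be issued from Python. The connection details and table names are hypothetical, and the SQL follows the common expire-then-insert pattern rather than any single platform's dialect.

```python
# Sketch of a Type 2 slowly-changing-dimension load issued from Python.
# Connection, schema, and table names are hypothetical illustrations.
import snowflake.connector

EXPIRE_CHANGED = """
UPDATE dim_merchant d
SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP
FROM stg_merchant s
WHERE d.merchant_id = s.merchant_id
  AND d.is_current
  AND d.category <> s.category   -- tracked attribute changed
"""

INSERT_NEW_VERSIONS = """
INSERT INTO dim_merchant (merchant_id, category, valid_from, valid_to, is_current)
SELECT s.merchant_id, s.category, CURRENT_TIMESTAMP, NULL, TRUE
FROM stg_merchant s
LEFT JOIN dim_merchant d
  ON d.merchant_id = s.merchant_id AND d.is_current
WHERE d.merchant_id IS NULL      -- brand-new keys, or keys just expired above
"""

conn = snowflake.connector.connect(account="...", user="...", password="...")
try:
    cur = conn.cursor()
    cur.execute(EXPIRE_CHANGED)      # close out versions whose attributes moved
    cur.execute(INSERT_NEW_VERSIONS)  # open a fresh current version for them
finally:
    conn.close()
```

The two statements together preserve full history: changed rows are closed with a `valid_to` timestamp, then reinserted as the new current version, while unchanged keys are left alone.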
Posted 2 weeks ago
7.0 - 12.0 years
27 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We’re hiring Databricks Developers skilled in PySpark & SQL for cloud-based projects. Multiple positions are open based on experience level. Email: Anita.s@liveconnections.in
*JOB AT HYDERABAD, MUMBAI, PUNE*
Required Candidate Profile: Exciting walk-in drive on Aug 2 across Mumbai, Pune & Hyderabad. Shape the future with data. 7–12 yrs total experience with 3–5 yrs in Databricks (Azure/AWS). Must know PySpark & SQL.
Posted 2 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary: We are looking for a highly skilled Big Data & ETL Tester to join our data engineering and analytics team. The ideal candidate will have strong experience in PySpark, SQL, and Python, with a deep understanding of ETL pipelines, data validation, and cloud-based testing on AWS. Familiarity with data visualization tools like Apache Superset or Power BI is a strong plus. You will work closely with our data engineering team to ensure data availability, consistency, and quality across complex data pipelines, and help transform business requirements into robust data testing frameworks.

Key Responsibilities:
• Collaborate with big data engineers to validate data pipelines and ensure data integrity across ingestion, processing, and transformation stages.
• Write complex PySpark and SQL queries to test and validate large-scale datasets.
• Perform ETL testing, covering schema validation, data completeness, accuracy, transformation logic, and performance testing.
• Conduct root cause analysis of data issues using structured debugging approaches.
• Build automated test scripts in Python for regression, smoke, and end-to-end data testing.
• Analyze large datasets to track KPIs and performance metrics supporting business operations and strategic decisions.
• Work with data analysts and business teams to translate business needs into testable data validation frameworks.
• Communicate testing results, insights, and data gaps via reports or dashboards (Superset/Power BI preferred).
• Identify and document areas of improvement in data processes and advocate for automation opportunities.
• Maintain detailed documentation of test plans, test cases, results, and associated dashboards.

Required Skills and Qualifications:
• 2+ years of experience in big data testing and ETL testing.
• Strong hands-on skills in PySpark, SQL, and Python.
• Solid experience working with cloud platforms, especially AWS (S3, EMR, Glue, Lambda, Athena, etc.).
• Familiarity with data warehouse and lakehouse architectures.
• Working knowledge of Apache Superset, Power BI, or similar visualization tools.
• Ability to analyze large, complex datasets and provide actionable insights.
• Strong understanding of data modeling concepts, data governance, and quality frameworks.
• Experience with automation frameworks and CI/CD for data validation is a plus.

Preferred Qualifications:
• Experience with Airflow, dbt, or other data orchestration tools.
• Familiarity with data cataloging tools (e.g., AWS Glue Data Catalog).
• Prior experience in a product or SaaS-based company with high-data-volume environments.

Why Join Us?
• Opportunity to work with a cutting-edge data stack in a fast-paced environment.
• Collaborate with passionate data professionals driving real business impact.
• Flexible work environment with a focus on learning and innovation.
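To illustrate the kind of validation this testing role describes, here is a small PySpark sketch that checks row-count, checksum, and null-key parity between a source extract and its loaded target; the paths and columns are hypothetical.

```python
# Sketch of an ETL parity check in PySpark (paths/columns hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.read.parquet("s3://example/raw/orders/")
target = spark.read.parquet("s3://example/curated/orders/")

# 1. Completeness: row counts should match after the load.
assert source.count() == target.count(), "row count mismatch"

# 2. Accuracy: aggregate checksums on a numeric column should agree.
src_sum = source.agg(F.sum("amount")).first()[0]
tgt_sum = target.agg(F.sum("amount")).first()[0]
assert src_sum == tgt_sum, f"amount checksum mismatch: {src_sum} vs {tgt_sum}"

# 3. Transformation logic: no nulls in a mandatory derived key.
null_keys = target.filter(F.col("order_key").isNull()).count()
assert null_keys == 0, f"{null_keys} rows missing order_key"
```

In practice each check would be a test case in a framework such as pytest, with results pushed to a dashboard rather than raised as bare assertions.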
Posted 2 weeks ago
3.0 - 5.0 years
9 - 11 Lacs
Pune
Work from Office
Hiring Senior Data Engineer for an AI-native startup. Work on scalable data pipelines, LLM workflows, web scraping (Scrapy, lxml), Pandas, APIs, and Django. Strong in Python, data quality, mentoring, and large-scale systems. Benefits: health insurance.
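As a sketch of the scraping stack named here (Scrapy, whose selectors are lxml-backed), a minimal spider; the site, selectors, and fields are made up.

```python
# Minimal Scrapy spider sketch (site and selectors hypothetical).
import scrapy


class ProductSpider(scrapy.Spider):
    name = "products"
    start_urls = ["https://example.com/catalog"]

    def parse(self, response):
        # XPath selectors (lxml-backed) yield one item per listing row.
        for row in response.xpath("//div[@class='listing']"):
            yield {
                "title": row.xpath(".//h2/text()").get(),
                "price": row.xpath(".//span[@class='price']/text()").get(),
            }
        # Follow pagination if a next link is present.
        next_page = response.xpath("//a[@rel='next']/@href").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```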
Posted 2 weeks ago
6.0 years
4 - 6 Lacs
Hyderābād
On-site
Senior Data Modernization Expert

Overview: We are building a high-impact Data Modernization Center of Excellence (COE) to help clients modernize their data platforms by migrating legacy data warehouses and ETL ecosystems to Snowflake. We are looking for an experienced and highly motivated Data Modernization Architect with deep expertise in Snowflake, Talend, and Informatica. This role is ideal for someone who thrives at the intersection of data engineering, architecture, and business strategy, and can translate legacy complexity into modern, scalable, cloud-native solutions.

Key Responsibilities:

Modernization & Migration:
• Lead end-to-end migration of legacy data warehouses (e.g., Teradata, Netezza, Oracle, SQL Server) to Snowflake.
• Reverse-engineer complex ETL pipelines built in Talend or Informatica, documenting logic and rebuilding them using modern frameworks (e.g., DBT, Snowflake Tasks, Streams, Snowpark).
• Build scalable ELT pipelines using Snowflake-native patterns, improving cost, performance, and maintainability.
• Design and validate data mapping and transformation logic, and ensure parity between source and target systems.
• Implement automation wherever possible (e.g., code converters, metadata extractors, migration playbooks).

Architecture & Cloud Integration:
• Architect modern data platforms leveraging Snowflake's full capabilities: Snowpipe, Streams, Tasks, Materialized Views, Snowpark, and Cortex AI.
• Integrate with cloud platforms (AWS, Azure, GCP) and orchestrate data workflows with Airflow, Cloud Functions, or Snowflake Tasks.
• Implement secure, compliant architectures with proper use of RBAC, masking, Unity Catalog, SSO, and external integrations.

Communication & Leadership:
• Act as a trusted advisor to internal teams and client stakeholders.
• Present modernization plans, risks, and ROI to both executive and technical audiences.
• Collaborate with delivery teams, pre-sales teams, and cloud architects to accelerate migration initiatives.
• Mentor junior engineers and promote standardization, reuse, and COE asset development.

Required Experience:
• 6+ years in data engineering or BI/DW architecture.
• 3+ years of deep, hands-on Snowflake implementation experience.
• 2+ years of migration experience from Talend and/or Informatica to Snowflake.
• Strong command of SQL, data modeling, ELT pipeline design, and performance tuning.
• Practical knowledge of modern orchestration tools (e.g., Airflow, DBT Cloud, Snowflake Tasks).
• Familiarity with legacy metadata parsing, parameterized job execution, and parallel processing logic in ETL tools.
• Good knowledge of cloud data security, data governance, and compliance standards.
• Strong written and verbal communication skills; capable of explaining technical concepts to CXOs and developers alike.

Bonus / Preferred:
• Snowflake certifications: SnowPro Advanced Architect, SnowPro Core.
• Experience building custom migration tools or accelerators.
• Hands-on with LLM-assisted code conversion tools.
• Experience in key verticals like retail, healthcare, or manufacturing.

Why Join This Team?
• Opportunity to be part of a founding core team defining modernization standards.
• Exposure to cutting-edge Snowflake features and migration accelerators.
• High-impact role with visibility across sales, delivery, and leadership.
• Career acceleration through complex problem-solving and ownership.
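Because the posting calls out Snowflake-native patterns, here is a hedged sketch pairing a Stream (change capture) with a scheduled Task that drains it; the warehouse and table names are hypothetical.

```python
# Sketch: a Snowflake Stream capturing changes on a landing table,
# drained on a schedule by a Task. Names are hypothetical.
import snowflake.connector

STATEMENTS = [
    # Change-data-capture stream over the landing table.
    "CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders",
    # Scheduled task that only runs when the stream actually has data.
    """
    CREATE OR REPLACE TASK load_orders_task
      WAREHOUSE = transform_wh
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO curated_orders (order_id, amount, change_type)
      SELECT order_id, amount, METADATA$ACTION
      FROM raw_orders_stream
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK load_orders_task RESUME",
]

conn = snowflake.connector.connect(account="...", user="...", password="...")
try:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
finally:
    conn.close()
```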
Posted 2 weeks ago
3.0 - 5.0 years
7 - 11 Lacs
Hyderabad, Chennai
Work from Office
Incedo is hiring Data Engineers - GCP: immediate to 30-day joiners preferred! Are you passionate about GCP data engineering and looking for an exciting opportunity to work on cutting-edge projects? We're looking for a GCP Data Engineer to join our team in Chennai and Hyderabad!

Skills Required:
• Experience: 3 to 5 years
• Experience with GCP, Python, Airflow, and PySpark

Location: Chennai/Hyderabad (WFO)

If you are interested, please drop your resume at anshika.arora@incedoinc.com. Walk-in drive in Hyderabad on 2nd Aug; kindly email me for an invite and more details.
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company: We are a forward-thinking organization dedicated to leveraging data to drive business success. Our mission is to empower teams with actionable insights and foster a culture of innovation and collaboration.

About the Role: We are looking for a skilled and motivated Data Engineer with expertise in Snowflake and DBT (Data Build Tool) to join our growing data team. In this role, you will be responsible for building scalable and efficient data pipelines, optimizing data warehouse performance, and enabling data-driven decision-making across the organization.

Responsibilities:
• Design, develop, and maintain scalable ETL/ELT pipelines using DBT and Snowflake.
• Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
• Optimize and manage Snowflake data warehouses, ensuring efficient storage, processing, and retrieval.
• Develop and enforce best practices for data modeling, transformation, and version control.
• Monitor and improve data pipeline reliability, performance, and data quality.
• Implement access controls, data governance, and documentation across the data stack.
• Perform code reviews and contribute to the overall architecture of the data platform.
• Stay up to date with industry trends and emerging technologies in the modern data stack.

Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
• 5+ years of experience in data engineering or a related field.
• Strong expertise in Snowflake data warehouse architecture, features, and optimization.
• Hands-on experience with DBT for data transformation and modeling.
• Proficiency in SQL and experience with data pipeline orchestration tools (e.g., Airflow, Prefect).
• Familiarity with cloud platforms (AWS, GCP, or Azure), especially data services.
• Understanding of data warehousing concepts, dimensional modeling, and modern ELT practices.
• Experience with version control systems (e.g., Git) and CI/CD workflows.

Required Skills: Expertise in Snowflake and DBT; strong SQL skills; experience with data pipeline orchestration tools; familiarity with cloud platforms.

Preferred Skills: Experience with data governance and documentation; knowledge of modern data stack technologies.

Pay range and compensation package: Competitive salary based on experience and qualifications.

Equal Opportunity Statement: We are committed to creating a diverse and inclusive workplace. We encourage applications from all qualified individuals regardless of race, gender, age, sexual orientation, disability, or any other characteristic protected by law.
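As a sketch of the dbt-on-Snowflake transformation work described above, here is a minimal dbt Python model (dbt models are more commonly SQL; Python models run via Snowpark on Snowflake). The upstream model name is hypothetical.

```python
# models/marts/orders_daily.py -- minimal dbt Python model sketch.
# On Snowflake, `session` is a Snowpark session and dbt.ref() returns
# a Snowpark DataFrame. The upstream model name is hypothetical.
import snowflake.snowpark.functions as F


def model(dbt, session):
    dbt.config(materialized="table")

    orders = dbt.ref("stg_orders")  # upstream staging model

    # Daily revenue rollup; dbt materializes the returned DataFrame.
    return (
        orders.group_by("order_date")
        .agg(F.sum("amount").alias("daily_revenue"))
    )
```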
Posted 2 weeks ago
175.0 years
0 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
• Expertise with handling large volumes of data coming from many disparate systems.
• Expertise with core Java, multithreading, backend processing, and transforming large data volumes.
• Working knowledge of Apache Flink, Apache Airflow, Apache Beam, and other open-source data processing platforms.
• Working knowledge of cloud platforms like GCP.
• Working knowledge of databases and performance tuning for complex big data scenarios: SingleStore DB and in-memory processing.
• Cloud deployments, CI/CD, and platform resiliency.
• Good experience with MVEL.
• Excellent communication skills, a collaboration mindset, and the ability to work through unknowns.
• Work with key stakeholders to drive data solutions that align to strategic roadmaps, prioritized initiatives, and strategic technology directions.
• Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management, and cost effectiveness.

Minimum Qualifications:
• Bachelor's degree in Computer Science, Computer Science Engineering, or a related field is required.
• 3+ years of large-scale technology engineering and formal management in a complex environment and/or comparable experience.
• To be successful in this role you will need to be good in Java, Flink, SQL, Kafka & GCP.
• Successful engineering and deployment of enterprise-grade technology products in an Agile environment.
• Large-scale software product engineering experience with contemporary tools and delivery methods (i.e., DevOps, CI/CD, Agile, etc.).
• 3+ years of hands-on engineering experience in Java and the data/distributed ecosystem.
• Ability to see the big picture with attention given to critical details.

Preferred Qualifications:
• Knowledge of Kafka and Spark.
• Finance domain knowledge.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
• Competitive base salaries
• Bonus incentives
• Support for financial well-being and retirement
• Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
• Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
• Generous paid parental leave policies (depending on your location)
• Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
• Free and confidential counseling support through our Healthy Minds program
• Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
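The role is Java-centric, but as a language-neutral illustration of the Flink streaming model it mentions, here is a minimal PyFlink sketch of a keyed running aggregation; inline toy data stands in for a real Kafka source.

```python
# Minimal PyFlink DataStream sketch: keyed running aggregation.
# Inline toy data stands in for a real Kafka source; illustration only.
from pyflink.common.typeinfo import Types
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# (account_id, amount) events; a production job would read from Kafka.
events = env.from_collection(
    [("acct-1", 120.0), ("acct-2", 75.5), ("acct-1", 30.0)],
    type_info=Types.TUPLE([Types.STRING(), Types.FLOAT()]),
)

# Running sum of amounts per account, maintained as keyed state.
totals = events.key_by(lambda e: e[0]).reduce(lambda a, b: (a[0], a[1] + b[1]))
totals.print()

env.execute("running-account-totals")
```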
Posted 2 weeks ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS - Data and Analytics (D&A) - Data Engineer (Python)

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Healthcare, Retail, Manufacturing and Auto, Supply Chain, and Finance.

The opportunity: We are currently seeking a seasoned Data Engineer with strong experience in Python to join our team of professionals.

Key Responsibilities:
• Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management.
• Implement data pipelines using Airflow, Kubernetes, and various AWS services.

Must-Have Skills:
• Experience in deploying and managing data warehouses.
• Advanced proficiency of at least 4 years in Python for data analysis and organization.
• Solid understanding of AWS cloud services.
• Proficient in using Apache Spark for large-scale data processing.

Skills and Qualifications Needed:
• Practical experience with Apache Airflow for workflow orchestration.
• Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures.
• Flexible, self-motivated approach with a strong commitment to problem resolution.
• Excellent written and oral communication skills, with the ability to deliver complex information clearly and effectively to a range of different audiences.
• Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support.
• Nice to have: exposure to Apache Druid and familiarity with relational database systems.

Desired Work Experience: A degree in computer science or a similar field.

What Working At EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 weeks ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Remote
THIS IS A FULLY REMOTE JOB WITH A 5-DAY WORK WEEK. THIS IS A ONE-YEAR CONTRACT JOB, LIKELY TO BE CONTINUED AFTER ONE YEAR.

Required Qualifications:
• Education: B.Tech/M.Tech in Computer Science, Data Engineering, or an equivalent field.
• Experience: 7-10 years in data engineering, with 2+ years in an industrial/operations-heavy environment (manufacturing, energy, supply chain, etc.).

Job Role: The Senior Data Engineer will be responsible for independently designing, developing, and deploying scalable data infrastructure to support analytics, optimization, and AI-driven use cases in a low-tech-maturity environment. You will own the data architecture end-to-end, work closely with data scientists, full-stack engineers, and operations teams, and be a driving force in creating a robust Industry 4.0-ready data backbone.

Key Responsibilities:
1. Data Architecture & Infrastructure
• Design and implement a scalable, secure, and future-ready data architecture from scratch.
• Lead the selection, configuration, and deployment of data lakes, warehouses (e.g., AWS Redshift, Azure Synapse), and ETL/ELT pipelines.
• Establish robust data ingestion pipelines from PLCs, DCS systems, SAP, Excel files, and third-party APIs.
• Ensure data quality, governance, lineage, and metadata management.
2. Data Engineering & Tooling
• Build and maintain modular, reusable ETL/ELT pipelines using Python, SQL, Apache Airflow, or equivalent.
• Set up real-time and batch processing capabilities using tools such as Kafka, Spark, or Azure Data Factory.
• Deploy and maintain scalable data storage solutions and optimize query performance.

Tech Stack: Strong hands-on expertise in:
• Python, SQL, Spark, Pandas
• ETL tools: Airflow, Azure Data Factory, or equivalent
• Cloud platforms: Azure (preferred), AWS, or GCP
• Databases: PostgreSQL, MS SQL Server, NoSQL (MongoDB, etc.)
• Data lakes/warehouses: S3, Delta Lake, Snowflake, Redshift, BigQuery
• Monitoring and logging: Prometheus, Grafana, ELK, etc.
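To illustrate the real-time ingestion layer this role sketches (Kafka into Spark), here is a minimal Structured Streaming read-and-land job; the broker, topic, and lake paths are hypothetical.

```python
# Sketch: Spark Structured Streaming ingesting sensor events from Kafka.
# Broker, topic, and checkpoint/sink paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("plc-ingest").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "plc-sensor-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the payload and tag ingestion time.
events = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.current_timestamp().alias("ingested_at"),
)

# Land raw events to a bronze zone; the checkpoint gives exactly-once sinks.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/bronze/plc/")
    .option("checkpointLocation", "s3://example-lake/_checkpoints/plc/")
    .start()
)
query.awaitTermination()
```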
Posted 2 weeks ago
6.0 - 10.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Data Engineering, Airflow, Fivetran, CI/CD using GitHub.

We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As a part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth & innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance, and GTM along with Business and Enterprise Technology teams.

As a Senior Data Engineer, you will:
• Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
• Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
• Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
• Lead and mentor engineering discussions, advocating for best practices.
• Actively participate in design and code reviews.
• Access and explore third-party data APIs to determine the data required to meet business needs.
• Ensure data quality and integrity across different sources and systems.
• Manage data pipelines for both analytics and operational purposes.
• Continuously enhance processes and policies to improve SLA and SOX compliance.

You'll be a great addition to the team if you:
• Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
• Possess over 5 years of experience in data engineering, focusing on building and maintaining data environments.
• Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment.
• Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes.
• Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
• Are proficient in programming with Python or other scripting languages.
• Have familiarity with columnar OLAP databases and data modeling.
• Have experience building ELT/ETL processes using tools like dbt, Airflow, and Fivetran, CI/CD using GitHub, and reporting in Tableau.
• Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.

Added bonus if you also have:
• A good understanding of Salesforce & NetSuite systems
• Experience in SaaS environments
• Designed and deployed ML models
• Experience with events and streaming data

Location: Remote; Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Join us as a Data Engineer (PySpark, AWS). We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling, and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you'll do: Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You'll also be responsible for:
• Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
• Participating in the data engineering community to deliver opportunities to support our strategic direction
• Carrying out complex data engineering tasks to build a scalable data architecture and the transformation of data to make it usable to analysts and data scientists
• Building advanced automation of data engineering pipelines through the removal of manual stages
• Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required

The skills you'll need: To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. You'll need at least eight years of experience working with Python, PySpark, and SQL. You'll also need experience in AWS architecture using EMR, EC2, S3, Lambda, and Glue, as well as experience in Apache Airflow, Anaconda, and SageMaker.

You'll also need:
• Experience of using programming languages alongside knowledge of data and software engineering fundamentals
• Experience with performance optimization and tuning
• Good knowledge of modern code development practices
• Great communication skills with the ability to proactively engage with a range of stakeholders
Posted 2 weeks ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Join us as a Data Engineer (PySpark, AWS). We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling, and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you'll do: Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You'll also be responsible for:
• Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
• Participating in the data engineering community to deliver opportunities to support our strategic direction
• Carrying out complex data engineering tasks to build a scalable data architecture and the transformation of data to make it usable to analysts and data scientists
• Building advanced automation of data engineering pipelines through the removal of manual stages
• Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required

The skills you'll need: To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. You'll need at least eight years of experience working with Python, PySpark, and SQL. You'll also need experience in AWS architecture using EMR, EC2, S3, Lambda, and Glue, as well as experience in Apache Airflow, Anaconda, and SageMaker.

You'll also need:
• Experience of using programming languages alongside knowledge of data and software engineering fundamentals
• Experience with performance optimization and tuning
• Good knowledge of modern code development practices
• Great communication skills with the ability to proactively engage with a range of stakeholders
Posted 2 weeks ago
8.0 years
3 - 4 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Develop comprehensive digital analytics solutions utilizing Adobe Analytics for web tracking, measurement, and insight generation Design, manage, and optimize interactive dashboards and reports using Power BI to support business decision-making Lead the design, development, and maintenance of robust ETL/ELT pipelines integrating diverse data sources Architect scalable data solutions leveraging Python for automation, scripting, and engineering tasks Oversee workflow orchestration using Apache Airflow to ensure timely and reliable data processing Provide leadership and develop robust forecasting models to support sales and marketing strategies Develop advanced SQL queries for data extraction, manipulation, analysis, and database management Implement best practices in data modeling and transformation using Snowflake and DBT; exposure to Cosmos DB is a plus Ensure code quality through version control best practices using GitHub Collaborate with cross-functional teams to understand business requirements and translate them into actionable analytics solutions Stay updated with the latest trends in digital analytics; familiarity or hands-on experience with Adobe Experience Platform (AEP) / Customer Journey Analytics (CJA) is highly desirable Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Master’s or Bachelor’s degree in Computer Science, Information Systems, Engineering, Mathematics, Statistics, Business Analytics, or a related field 8+ years of progressive experience in digital analytics, data analytics or business intelligence roles Experience with data modeling and transformation using tools such as DBT and Snowflake; familiarity with Cosmos DB is a plus Experience developing forecasting models and conducting predictive analytics to drive business strategy Advanced proficiency in web and digital analytics platforms (Adobe Analytics) Proficiency in ETL/ELT pipeline development and workflow orchestration (Apache Airflow) Skilled in creating interactive dashboards and reports using Power BI or similar BI tools Deep understanding of digital marketing metrics, KPIs, attribution models, and customer journey analysis Industry certifications relevant to digital analytics or cloud data platforms Ability to deliver clear digital reporting and actionable insights to stakeholders at all organizational levels At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #NJP
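As an illustration of the orchestration stack this listing names, a minimal sketch, assuming Airflow 2.4+ and a dbt project available on the worker, of a daily DAG that lands an extract and then runs dbt models. The DAG id, scripts, and paths are hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_digital_analytics",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Land raw analytics extracts (placeholder command, hypothetical script).
    extract = BashOperator(
        task_id="extract_adobe_analytics",
        bash_command="python /opt/pipelines/extract_adobe.py",
    )

    # Run dbt models that build the reporting layer in Snowflake (hypothetical path).
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )

    extract >> transform
```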
Posted 2 weeks ago
5.0 - 6.0 years
0 Lacs
Andhra Pradesh
On-site
Title: Developer (AWS Engineer) Requirements: Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred. Strong hands-on experience; proficient in Node.js and Python. Seasoned developer capable of independently driving development tasks. Ability to understand the existing system architecture and work towards the target architecture. Experience with data profiling activities, discovering data quality challenges, and documenting them. Good to have: experience with development and implementation of large-scale Data Lake and data analytics platforms on the AWS Cloud platform. Develop and unit test data pipeline architecture for data ingestion processes using AWS native services. Experience with development on AWS Cloud using AWS services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc. Good to have: experience with building a data analytics platform using Databricks (data pipelines) and Starburst (semantic layer) in an AWS cloud environment. Experience with orchestration of workflows in an enterprise environment. Experience working with source code management tools such as AWS CodeCommit or GitHub. Experience working with Jenkins or any CI/CD pipelines using AWS services. Working experience with Agile methodology. Experience working with an onshore/offshore model and collaborating on deliverables. Good communication skills to interact with the onshore team. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
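To make the Glue-based ingestion concrete, a minimal sketch of an AWS Glue PySpark job of the sort this listing describes: read a table registered in the Glue Data Catalog and write curated Parquet to S3. Database, table, and bucket names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table via the Glue Data Catalog (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Drop malformed records, then write curated Parquet back to S3.
curated = raw.drop_fields(["_corrupt_record"])
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```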
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position - Technical Architect Location - Pune Experience - 6+ Years ABOUT HASHEDIN We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture. WHY SHOULD YOU JOIN US? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance – HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. JOB TITLE - Technical Architect B.E/B.Tech, MCA, M.E/M.Tech graduate with 6-10 years of experience (this includes 4 years of experience as an application architect or data architect) • Java/Python/UI/DE • GCP/AWS/Azure • Generative AI-enabled application design pattern knowledge is a value addition. • Excellent technical background with a breadth of knowledge across analytics, cloud architecture, distributed applications, integration, API design, etc. • Experience in technology stack selection and the definition of solution, technology, and integration architectures for small to mid-sized applications and cloud-hosted platforms. • Strong understanding of various design and architecture patterns. • Strong experience in developing scalable architecture. • Experience implementing and governing software engineering processes, practices, tools, and standards for development teams. • Proficient in effort estimation techniques; will actively support project managers and scrum masters in planning the implementation and will work with test leads on the definition of an appropriate test strategy for the realization of a quality solution. • Extensive experience as a technology/engineering subject matter expert, i.e., high-level solution definition, sizing, and RFI/RFP responses. • Aware of the latest technology trends, engineering processes, practices, and metrics. • Architecture experience with PaaS and SaaS platforms hosted on Azure, AWS, or GCP. • Infrastructure sizing and design experience for on-premise and cloud-hosted platforms. • Ability to understand the business domain and requirements and map them to technical solutions. • Outstanding interpersonal skills; ability to connect and present to CXOs from client organizations. • Strong leadership, business communication, consulting, and presentation skills. • Positive, service-oriented personality. OVERVIEW OF THE ROLE: This role serves as an exemplar for the application of the team's software development processes and deployment procedures, and the incumbent actively contributes to the establishment of best practices and methodologies within the team. You will craft and deploy resilient APIs, bridging cloud infrastructure and software development with seamless API design, development, and deployment. • Works at the intersection of infrastructure and software engineering by designing and deploying data and pipeline management frameworks built on top of components including Hadoop, Hive, Spark, HBase, Kafka streaming, Tableau, and Airflow, as well as cloud-based data engineering services like S3, Redshift, Athena, and Kinesis.
• Collaborate with various teams to build and maintain the most innovative, reliable, secure, and cost-effective distributed solutions. • Design and develop big data, real-time analytics, and streaming solutions using industry-standard technologies. • Deliver the most complex and valuable components of an application on time and to specification. • Play the role of a Team Lead, managing or influencing a large portion of an account or a small project in its entirety, and demonstrate an understanding of practical value, consistently combining it with theoretical knowledge to make balanced technical decisions.
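As a concrete illustration of the Kafka streaming work this role mentions, a minimal Spark Structured Streaming sketch (hypothetical topic, broker, and paths; assumes the spark-sql-kafka connector package is on the classpath) that consumes events and appends them to the lake with checkpointing.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("clickstream-ingest").getOrCreate()

# Subscribe to a Kafka topic (hypothetical broker and topic names).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Kafka delivers bytes; decode the key and payload before writing.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Append to the lake with a checkpoint so the stream can recover on restart.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-lake/clickstream/")
    .option("checkpointLocation", "s3://example-lake/_checkpoints/clickstream/")
    .start()
)
query.awaitTermination()
```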
Posted 2 weeks ago
4.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics (D&A) – Data Engineer (Python) As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key business domains and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance. The opportunity We are currently seeking a seasoned Data Engineer with good experience in Python to join our team of professionals. Key Responsibilities: Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management. Implement data pipelines using Airflow, Kubernetes, and various AWS services. Must Have Skills: Experience in deploying and managing data warehouses. Advanced proficiency of at least 4 years in Python for data analysis and organization. Solid understanding of AWS cloud services. Proficient in using Apache Spark for large-scale data processing. Skills and Qualifications Needed: Practical experience with Apache Airflow for workflow orchestration. Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures. Flexible, self-motivated approach with a strong commitment to problem resolution. Excellent written and oral communication skills, with the ability to deliver complex information in a clear and effective manner to a range of different audiences. Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support. Nice to have: exposure to Apache Druid and familiarity with relational database systems. Desired Qualifications: A degree in computer science or a similar field. What Working At EY Offers At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
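For a concrete reading of "Data Lake tables" as this listing uses the term, a minimal PySpark sketch that registers a partitioned Parquet dataset as a catalog table so it can be queried by name; paths, database, and table names are hypothetical, and a Hive-compatible metastore (such as the Glue Data Catalog) is assumed.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("lake-tables")
    .enableHiveSupport()  # lets saveAsTable persist to a Hive-compatible catalog
    .getOrCreate()
)

# Curated input written by an upstream job (hypothetical path).
orders = spark.read.parquet("s3://example-lake/curated/orders/")

# Ensure the target database exists, then persist a partitioned table;
# with the Glue Data Catalog as the metastore, other engines can query it too.
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
(
    orders.write.mode("overwrite")
    .partitionBy("order_date")
    .format("parquet")
    .option("path", "s3://example-lake/tables/orders/")
    .saveAsTable("analytics.orders")
)
```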
Posted 2 weeks ago
7.0 - 12.0 years
14 - 24 Lacs
Pune
Work from Office
vConstruct, a Pune-based Construction Technology company, is seeking a Senior Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency. Responsibilities Lead the end-to-end design and development of scalable, high-performance data pipelines and ETL/ELT frameworks aligned with modern data engineering best practices. Architect complex data integration workflows that bring together structured, semi-structured, and unstructured data from both cloud and on-premise sources. Build robust real-time, batch, and on-demand pipelines with built-in observability: monitoring, alerting, and automated error handling. Partner with analysts, data scientists, and business leaders to define and deliver reliable data models, quality frameworks, and SLAs that power key business insights. Ensure optimal pipeline performance and throughput, with clearly defined SLAs and proactive alerting for data delivery or quality issues. Collaborate with platform, DevOps, and architecture teams to build secure, reusable, and CI/CD-enabled data workflows that align with enterprise architecture standards. Establish and enforce best practices in source control, code reviews, testing automation, and continuous delivery for all data engineering components. Lead root cause analysis (RCA) and preventive maintenance for critical data failures, ensuring minimal business impact and continuous service improvement. Guide the team in establishing standards for data modeling, transformation logic, and governance, ensuring long-term maintainability and scalability. Design and execute comprehensive testing strategies (unit, integration, and system testing), ensuring high data reliability and pipeline resilience. Monitor and fine-tune data pipeline and query performance, optimizing for reliability, scalability, and cost-efficiency. Create and maintain detailed technical documentation, including data architecture diagrams, process flows, and integration specifications for internal and external stakeholders. Facilitate and lead discussions with business and operational teams to understand data requirements, prioritize initiatives, and drive data strategy forward. Qualifications 7 to 10 years of hands-on experience in data engineering roles with a proven record of building scalable and secure data platforms. Over 5 years of experience in scripting languages such as Python for data processing, automation, and ETL development. 4+ years of experience with Snowflake, including performance tuning, security model design, and advanced SQL development. 5+ years of experience with data integration tools such as Azure Data Factory, Fivetran, or Matillion. 5+ years of experience in writing complex, highly optimized SQL queries on large datasets. Proven experience integrating and managing APIs, JSON, XML, and webhooks for data acquisition. Hands-on experience with cloud platforms (Azure/AWS) and orchestration tools like Apache Airflow or equivalent. Experience with CI/CD pipelines, automated testing, and code versioning tools (e.g., Git). 
Familiarity with dbt or similar transformation tools and best practices for modular transformation development. Exposure to data visualization tools like Power BI for supporting downstream analytics is a plus. Strong interpersonal and communication skills with the ability to lead discussions with technical and business stakeholders. Education Bachelor’s or Master’s degree in Computer Science/Information Technology or a related field. Equivalent academic and work experience can be considered. About vConstruct: vConstruct specializes in providing high quality Building Information Modeling and Construction Technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. For more information, please visit www.vconstruct.com About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, and higher education and healthcare markets. With the purpose of building great things, great teams, great buildings, great relationships—DPR is a truly great company. For more information, please visit www.dpr.com
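As one illustration of the Snowflake-centred ELT work described above, a minimal sketch using the snowflake-connector-python package (hypothetical account, stage, and table names) that bulk-loads a staged file and then derives a reporting table inside Snowflake.

```python
import snowflake.connector

# All connection values are hypothetical; use a secrets manager in practice.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

cur = conn.cursor()
try:
    # Bulk-load a staged CSV into a raw table (stage and table are hypothetical).
    cur.execute(
        "COPY INTO raw_orders FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )

    # Simple ELT transform: derive a reporting table inside Snowflake.
    cur.execute(
        """
        CREATE OR REPLACE TABLE reporting.daily_orders AS
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM raw_orders
        GROUP BY order_date
        """
    )
finally:
    cur.close()
    conn.close()
```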
Posted 2 weeks ago
5.0 - 10.0 years
25 - 35 Lacs
Gurugram
Hybrid
Job Title: Data Engineer - Apache Spark, Scala, GCP & Azure Location: Gurugram (Hybrid, 3 days/week in office) Experience: 5–10 Years Type: Full-time Apply: Share your resume with the details listed below to vijay.s@xebia.com Availability: Immediate joiners or max 2 weeks' notice period only About the Role Xebia is looking for a skilled Data Engineer to join our fast-paced team in Gurugram. You will work on building and optimizing scalable data pipelines, processing large datasets using Apache Spark and Scala, and deploying on cloud platforms like GCP and Azure. If you're passionate about clean architecture, high-quality data flow, and performance tuning, this is the opportunity for you. Key Responsibilities Design and develop robust ETL pipelines using Apache Spark Write clean and efficient data processing code in Scala Handle large-scale data movement, transformation, and storage Build solutions on Google Cloud Platform (GCP) and Microsoft Azure Collaborate with teams to define data strategies and ensure data quality Optimize jobs for performance and cost on distributed systems Document technical designs and ETL flows clearly for the team Must-Have Skills Apache Spark Scala ETL design & development Cloud platforms: GCP & Azure Strong understanding of Data Engineering best practices Solid communication and collaboration skills Good-to-Have Skills Apache tools (Kafka, Beam, Airflow, etc.) Knowledge of data lake and data warehouse concepts CI/CD for data pipelines Exposure to modern data monitoring and observability tools Why Xebia? At Xebia, you'll be part of a forward-thinking, tech-savvy team working on high-impact, global data projects. We prioritize clean code, scalable solutions, and continuous learning. Join us to build real-time, cloud-native data platforms that power business intelligence across industries. To Apply Please share your updated resume and include the following details in your email to vijay.s@xebia.com: Full Name: Total Experience: Current CTC: Expected CTC: Current Location: Preferred Xebia Location: Gurugram Notice Period / Last Working Day (if serving): Primary Skills: LinkedIn Profile URL: Note: Only candidates who can join immediately or within 2 weeks will be considered. Build intelligent, scalable data solutions with Xebia – let's shape the future of data together.
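To give a flavour of the cross-cloud batch work this role describes, a minimal sketch (shown in PySpark for brevity, though the role itself calls for Scala) that reads raw data from Google Cloud Storage, aggregates it, and writes Parquet to Azure Data Lake Storage. Bucket and container names are hypothetical, and the GCS and ABFS connector jars are assumed to be on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cross-cloud-batch").getOrCreate()

# Source: raw CSV on Google Cloud Storage (hypothetical bucket).
sales = spark.read.option("header", True).csv("gs://example-raw/sales/")

# Aggregate to a daily summary; repartition to control output file count and cost.
daily = (
    sales.groupBy("sale_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
    .repartition(8)
)

# Sink: Parquet on Azure Data Lake Storage Gen2 (hypothetical account/container).
daily.write.mode("overwrite").parquet(
    "abfss://curated@exampleaccount.dfs.core.windows.net/sales_daily/"
)
```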
Posted 2 weeks ago