7.0 - 12.0 years
27 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
We're hiring Databricks Developers skilled in PySpark & SQL for cloud-based projects. Multiple positions are open based on experience level. Job locations: Hyderabad, Mumbai, Pune. Email: Anita.s@liveconnections.in
Required candidate profile: 7–12 years total experience, with 3–5 years in Databricks (Azure/AWS); must know PySpark & SQL. Exciting walk-in drive on Aug 2 across Mumbai, Pune & Hyderabad. Shape the future with data.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary: We are looking for a highly skilled Big Data & ETL Tester to join our data engineering and analytics team. The ideal candidate will have strong experience in PySpark, SQL, and Python, with a deep understanding of ETL pipelines, data validation, and cloud-based testing on AWS. Familiarity with data visualization tools like Apache Superset or Power BI is a strong plus. You will work closely with our data engineering team to ensure data availability, consistency, and quality across complex data pipelines, and help transform business requirements into robust data testing frameworks.
Key Responsibilities:
• Collaborate with big data engineers to validate data pipelines and ensure data integrity across ingestion, processing, and transformation stages.
• Write complex PySpark and SQL queries to test and validate large-scale datasets.
• Perform ETL testing, covering schema validation, data completeness, accuracy, transformation logic, and performance testing.
• Conduct root cause analysis of data issues using structured debugging approaches.
• Build automated test scripts in Python for regression, smoke, and end-to-end data testing.
• Analyze large datasets to track KPIs and performance metrics supporting business operations and strategic decisions.
• Work with data analysts and business teams to translate business needs into testable data validation frameworks.
• Communicate testing results, insights, and data gaps via reports or dashboards (Superset/Power BI preferred).
• Identify and document areas of improvement in data processes and advocate for automation opportunities.
• Maintain detailed documentation of test plans, test cases, results, and associated dashboards.
Required Skills and Qualifications:
• 2+ years of experience in big data testing and ETL testing.
• Strong hands-on skills in PySpark, SQL, and Python.
• Solid experience working with cloud platforms, especially AWS (S3, EMR, Glue, Lambda, Athena, etc.).
• Familiarity with data warehouse and lakehouse architectures.
• Working knowledge of Apache Superset, Power BI, or similar visualization tools.
• Ability to analyze large, complex datasets and provide actionable insights.
• Strong understanding of data modeling concepts, data governance, and quality frameworks.
• Experience with automation frameworks and CI/CD for data validation is a plus.
Preferred Qualifications:
• Experience with Airflow, dbt, or other data orchestration tools.
• Familiarity with data cataloging tools (e.g., AWS Glue Data Catalog).
• Prior experience in a product or SaaS-based company with high data volume environments.
Why Join Us?
• Opportunity to work with a cutting-edge data stack in a fast-paced environment.
• Collaborate with passionate data professionals driving real business impact.
• Flexible work environment with a focus on learning and innovation.
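A minimal sketch of the kind of pipeline validation this role describes, assuming hypothetical table paths and key columns; the checks (row-count parity, schema match, null scan on key columns) mirror the completeness, schema, and accuracy testing listed above.

```python
# Illustrative ETL-validation checks in PySpark. Paths, table layout, and key
# columns are placeholders; a real test suite would read them from config.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.read.parquet("s3://example-bucket/raw/orders/")      # assumed path
target = spark.read.parquet("s3://example-bucket/curated/orders/")  # assumed path

# 1. Completeness: row counts should match after transformation.
assert source.count() == target.count(), "Row count mismatch between source and target"

# 2. Schema validation: every expected column must exist in the target.
expected_cols = {"order_id", "customer_id", "order_ts", "amount"}
missing = expected_cols - set(target.columns)
assert not missing, f"Missing columns in target: {missing}"

# 3. Accuracy: key columns should not contain nulls.
null_counts = target.select(
    [F.sum(F.col(c).isNull().cast("int")).alias(c) for c in expected_cols]
).first().asDict()
bad = {c: n for c, n in null_counts.items() if n}
assert not bad, f"Null values found in key columns: {bad}"

spark.stop()
```

In practice each check would be a parameterized test case (e.g., under pytest) rather than bare asserts, so failures are reported per dataset.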
Posted 1 week ago
3.0 - 5.0 years
9 - 11 Lacs
Pune
Work from Office
Hiring a Senior Data Engineer for an AI-native startup. Work on scalable data pipelines, LLM workflows, web scraping (Scrapy, lxml), Pandas, APIs, and Django. Strong in Python, data quality, mentoring, and large-scale systems. Benefits: health insurance.
Posted 1 week ago
6.0 years
4 - 6 Lacs
Hyderābād
On-site
Senior Data Modernization Expert
Overview: We are building a high-impact Data Modernization Center of Excellence (COE) to help clients modernize their data platforms by migrating legacy data warehouses and ETL ecosystems to Snowflake. We are looking for an experienced and highly motivated Data Modernization Architect with deep expertise in Snowflake, Talend, and Informatica. This role is ideal for someone who thrives at the intersection of data engineering, architecture, and business strategy, and can translate legacy complexity into modern, scalable cloud-native solutions.
Key Responsibilities
Modernization & Migration:
• Lead end-to-end migration of legacy data warehouses (e.g., Teradata, Netezza, Oracle, SQL Server) to Snowflake.
• Reverse-engineer complex ETL pipelines built in Talend or Informatica, documenting logic and rebuilding using modern frameworks (e.g., DBT, Snowflake Tasks, Streams, Snowpark).
• Build scalable ELT pipelines using Snowflake-native patterns, improving cost, performance, and maintainability.
• Design and validate data mapping and transformation logic, and ensure parity between source and target systems.
• Implement automation wherever possible (e.g., code converters, metadata extractors, migration playbooks).
Architecture & Cloud Integration:
• Architect modern data platforms leveraging Snowflake's full capabilities: Snowpipe, Streams, Tasks, Materialized Views, Snowpark, and Cortex AI.
• Integrate with cloud platforms (AWS, Azure, GCP) and orchestrate data workflows with Airflow, Cloud Functions, or Snowflake Tasks.
• Implement secure, compliant architectures with proper use of RBAC, masking, Unity Catalog, SSO, and external integrations.
Communication & Leadership:
• Act as a trusted advisor to internal teams and client stakeholders.
• Present modernization plans, risks, and ROI to both executive and technical audiences.
• Collaborate with delivery teams, pre-sales teams, and cloud architects to accelerate migration initiatives.
• Mentor junior engineers and promote standardization, reuse, and COE asset development.
Required Experience
• 6+ years in data engineering or BI/DW architecture.
• 3+ years of deep, hands-on Snowflake implementation experience.
• 2+ years of migration experience from Talend and/or Informatica to Snowflake.
• Strong command of SQL, data modeling, ELT pipeline design, and performance tuning.
• Practical knowledge of modern orchestration tools (e.g., Airflow, DBT Cloud, Snowflake Tasks).
• Familiarity with legacy metadata parsing, parameterized job execution, and parallel processing logic in ETL tools.
• Good knowledge of cloud data security, data governance, and compliance standards.
• Strong written and verbal communication skills; capable of explaining technical concepts to CXOs and developers alike.
Bonus / Preferred
• Snowflake certifications: SnowPro Advanced Architect, SnowPro Core.
• Experience building custom migration tools or accelerators.
• Hands-on with LLM-assisted code conversion tools.
• Experience in key verticals like retail, healthcare, or manufacturing.
Why Join This Team?
• Opportunity to be part of a founding core team defining modernization standards.
• Exposure to cutting-edge Snowflake features and migration accelerators.
• High-impact role with visibility across sales, delivery, and leadership.
• Career acceleration through complex problem-solving and ownership.
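A minimal sketch of the Snowflake-native ELT pattern the posting refers to (a Stream feeding a scheduled Task), driven from Python via the Snowflake connector. The account, warehouse, object names, and schedule are assumptions for illustration, not details from the posting.

```python
# Illustrative Snowflake Streams + Tasks setup via the Python connector.
# Connection parameters and object names are placeholders; store credentials
# in a secrets manager in practice.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder
    user="etl_user",        # placeholder
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # Capture change data on the raw table.
    "CREATE STREAM IF NOT EXISTS raw_orders_stream ON TABLE raw_orders",
    # A task that periodically merges new changes into the curated table.
    """
    CREATE TASK IF NOT EXISTS load_curated_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
    AS
      INSERT INTO curated.orders
      SELECT order_id, customer_id, order_ts, amount
      FROM raw_orders_stream
      WHERE METADATA$ACTION = 'INSERT'
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK load_curated_orders RESUME",
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)
cur.close()
conn.close()
```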
Posted 1 week ago
3.0 - 5.0 years
7 - 11 Lacs
Hyderabad, Chennai
Work from Office
Incedo is hiring Data Engineers (GCP): immediate to 30-day joiners preferred! Are you passionate about GCP data engineering and looking for an exciting opportunity to work on cutting-edge projects? We're looking for a GCP Data Engineer to join our team in Chennai and Hyderabad! Skills required: 3 to 5 years of experience with GCP, Python, Airflow, and PySpark. Location: Chennai/Hyderabad (work from office). If you are interested, please send your resume to anshika.arora@incedoinc.com. A walk-in drive will be held in Hyderabad on 2nd Aug; please email for an invite and more details.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Company: We are a forward-thinking organization dedicated to leveraging data to drive business success. Our mission is to empower teams with actionable insights and foster a culture of innovation and collaboration.
About the Role: We are looking for a skilled and motivated Data Engineer with expertise in Snowflake and DBT (Data Build Tool) to join our growing data team. In this role, you will be responsible for building scalable and efficient data pipelines, optimizing data warehouse performance, and enabling data-driven decision-making across the organization.
Responsibilities:
• Design, develop, and maintain scalable ETL/ELT pipelines using DBT and Snowflake.
• Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and translate them into technical solutions.
• Optimize and manage Snowflake data warehouses, ensuring efficient storage, processing, and retrieval.
• Develop and enforce best practices for data modeling, transformation, and version control.
• Monitor and improve data pipeline reliability, performance, and data quality.
• Implement access controls, data governance, and documentation across the data stack.
• Perform code reviews and contribute to the overall architecture of the data platform.
• Stay up to date with industry trends and emerging technologies in the modern data stack.
Qualifications:
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
• 5+ years of experience in data engineering or a related field.
• Strong expertise in Snowflake data warehouse architecture, features, and optimization.
• Hands-on experience with DBT for data transformation and modeling.
• Proficiency in SQL and experience with data pipeline orchestration tools (e.g., Airflow, Prefect).
• Familiarity with cloud platforms (AWS, GCP, or Azure), especially data services.
• Understanding of data warehousing concepts, dimensional modeling, and modern ELT practices.
• Experience with version control systems (e.g., Git) and CI/CD workflows.
Required Skills: Expertise in Snowflake and DBT. Strong SQL skills. Experience with data pipeline orchestration tools. Familiarity with cloud platforms.
Preferred Skills: Experience with data governance and documentation. Knowledge of modern data stack technologies.
Pay range and compensation package: Competitive salary based on experience and qualifications.
Equal Opportunity Statement: We are committed to creating a diverse and inclusive workplace. We encourage applications from all qualified individuals regardless of race, gender, age, sexual orientation, disability, or any other characteristic protected by law.
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
How will you make an impact in this role?
• Expertise with handling large volumes of data coming from many different disparate systems
• Expertise with Core Java, multithreading, backend processing, and transforming large data volumes
• Working knowledge of Apache Flink, Apache Airflow, Apache Beam, and other open-source data processing platforms
• Working knowledge of cloud platforms like GCP
• Working knowledge of databases and performance tuning for complex big data scenarios: SingleStore DB and in-memory processing
• Cloud deployments, CI/CD, and platform resiliency
• Good experience with MVEL
• Excellent communication skills, a collaboration mindset, and the ability to work through unknowns
• Work with key stakeholders to drive data solutions that align to strategic roadmaps, prioritized initiatives, and strategic technology directions.
• Own accountability for all quality aspects and metrics of the product portfolio, including system performance, platform availability, operational efficiency, risk management, information security, data management, and cost effectiveness.
Minimum Qualifications:
• Bachelor's degree in Computer Science, Computer Science Engineering, or a related field is required.
• 3+ years of large-scale technology engineering and formal management in a complex environment and/or comparable experience.
• To be successful in this role you will need to be good in Java, Flink, SQL, Kafka, and GCP.
• Successful engineering and deployment of enterprise-grade technology products in an Agile environment.
• Large-scale software product engineering experience with contemporary tools and delivery methods (i.e., DevOps, CI/CD, Agile, etc.).
• 3+ years of hands-on engineering experience in Java and the data/distributed ecosystem.
• Ability to see the big picture with attention given to critical details.
Preferred Qualifications:
• Knowledge of Kafka and Spark
• Finance domain knowledge
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
• Competitive base salaries
• Bonus incentives
• Support for financial well-being and retirement
• Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
• Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
• Generous paid parental leave policies (depending on your location)
• Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
• Free and confidential counseling support through our Healthy Minds program
• Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
4.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – Data Engineer (Python)
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.
The opportunity: We are currently seeking a seasoned Data Engineer with good experience in Python to join our team of professionals.
Key Responsibilities:
• Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management.
• Implement data pipelines using Airflow, Kubernetes, and various AWS services.
Must-Have Skills:
• Experience in deploying and managing data warehouses
• Advanced proficiency of at least 4 years in Python for data analysis and organization
• Solid understanding of AWS cloud services
• Proficient in using Apache Spark for large-scale data processing
Skills and Qualifications Needed:
• Practical experience with Apache Airflow for workflow orchestration
• Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures
• Flexible, self-motivated approach with a strong commitment to problem resolution
• Excellent written and oral communication skills, with the ability to deliver complex information in a clear and effective manner to a range of different audiences
• Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support
• Nice to have: exposure to Apache Druid
• Familiarity with relational database systems
Desired Work Experience: A degree in computer science or a similar field.
What Working At EY Offers: At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that's right for you
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
7.0 - 12.0 years
15 - 27 Lacs
Bengaluru
Remote
THIS IS A FULLY REMOTE JOB WITH A 5-DAY WORK WEEK. THIS IS A ONE-YEAR CONTRACT JOB, LIKELY TO BE CONTINUED AFTER ONE YEAR.
Required Qualifications
• Education: B.Tech/M.Tech in Computer Science, Data Engineering, or an equivalent field.
• Experience: 7-10 years in data engineering, with 2+ years in an industrial/operations-heavy environment (manufacturing, energy, supply chain, etc.).
Job Role: The Senior Data Engineer will be responsible for independently designing, developing, and deploying scalable data infrastructure to support analytics, optimization, and AI-driven use cases in a low-tech-maturity environment. You will own the data architecture end-to-end, work closely with data scientists, full stack engineers, and operations teams, and be a driving force in creating a robust Industry 4.0-ready data backbone.
Key Responsibilities
1. Data Architecture & Infrastructure
• Design and implement a scalable, secure, and future-ready data architecture from scratch.
• Lead the selection, configuration, and deployment of data lakes, warehouses (e.g., AWS Redshift, Azure Synapse), and ETL/ELT pipelines.
• Establish robust data ingestion pipelines from PLCs, DCS systems, SAP, Excel files, and third-party APIs.
• Ensure data quality, governance, lineage, and metadata management.
2. Data Engineering & Tooling
• Build and maintain modular, reusable ETL/ELT pipelines using Python, SQL, Apache Airflow, or equivalent.
• Set up real-time and batch processing capabilities using tools such as Kafka, Spark, or Azure Data Factory.
• Deploy and maintain scalable data storage solutions and optimize query performance.
Tech Stack (strong hands-on expertise in):
• Python, SQL, Spark, Pandas
• ETL tools: Airflow, Azure Data Factory, or equivalent
• Cloud platforms: Azure (preferred), AWS or GCP
• Databases: PostgreSQL, MS SQL Server, NoSQL (MongoDB, etc.)
• Data lakes/warehouses: S3, Delta Lake, Snowflake, Redshift, BigQuery
• Monitoring and logging: Prometheus, Grafana, ELK, etc.
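A minimal sketch of the real-time ingestion capability this posting describes: Spark Structured Streaming reading plant telemetry from a Kafka topic and landing it in a lake path. The broker, topic, payload schema, and paths are placeholders, and the job assumes the Kafka connector package is on the Spark classpath.

```python
# Illustrative Kafka -> Spark Structured Streaming ingestion (PySpark).
# All names, paths, and the payload schema are assumptions for this sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

# Assumed JSON payload shape coming from the PLC/DCS gateway.
schema = StructType([
    StructField("machine_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
    .option("subscribe", "plant-telemetry")             # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into columns.
events = raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e")).select("e.*")

query = (
    events.writeStream.format("parquet")                # or "delta", depending on the lake format
    .option("checkpointLocation", "abfss://lake/checkpoints/telemetry")  # placeholder
    .option("path", "abfss://lake/bronze/telemetry")                      # placeholder
    .outputMode("append")
    .start()
)
query.awaitTermination()
```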
Posted 1 week ago
6.0 - 10.0 years
7 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Skills: Data Engineering, Airflow, Fivetran, CI/CD using GitHub.
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance and GTM along with Business and Enterprise Technology teams.
As a Senior Data Engineer, you will:
• Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations.
• Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes.
• Cultivate collaboration with corporate engineering, product teams, and other engineering groups.
• Lead and mentor engineering discussions, advocating for best practices.
• Actively participate in design and code reviews.
• Access and explore third-party data APIs to determine the data required to meet business needs.
• Ensure data quality and integrity across different sources and systems.
• Manage data pipelines for both analytics and operational purposes.
• Continuously enhance processes and policies to improve SLA and SOX compliance.
You'll be a great addition to the team if you:
• Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field.
• Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments.
• Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes and managing data solutions within an SLA-driven environment.
• Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes.
• Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB).
• Are proficient in programming with Python or other scripting languages.
• Have familiarity with columnar OLAP databases and data modeling.
• Have experience in building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau.
• Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements.
Added bonus if you also have:
• A good understanding of Salesforce & Netsuite systems
• Experience in SaaS environments
• Designed and deployed ML models
• Experience with events and streaming data
Location: Remote, Delhi NCR, Bengaluru, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
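A minimal sketch of the ELT orchestration named in this posting: an Airflow DAG (Airflow 2.x assumed) that runs dbt transformations and tests after upstream loads (e.g., from Fivetran) have landed. The DAG id, schedule, project path, and target are illustrative assumptions.

```python
# Illustrative Airflow DAG: run dbt transformations after upstream loads.
# dag_id, schedule, paths, and the dbt target are placeholders for this sketch.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="elt_dbt_transform",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",   # daily at 06:00, after ingestion has landed data
    catchup=False,
    tags=["elt", "dbt"],
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test --target prod",
    )

    # Models are built first, then data tests validate them.
    dbt_run >> dbt_test
```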
Posted 1 week ago
6.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Join us as a Data Engineer (PySpark, AWS). We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge, and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.
What you'll do: Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.
You'll also be responsible for:
• Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
• Participating in the data engineering community to deliver opportunities to support our strategic direction
• Carrying out complex data engineering tasks to build a scalable data architecture and the transformation of data to make it usable to analysts and data scientists
• Building advanced automation of data engineering pipelines through the removal of manual stages
• Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required
The skills you'll need: To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. You'll need at least eight years of experience working with Python, PySpark and SQL, experience in AWS architecture using EMR, EC2, S3, Lambda and Glue, and experience in Apache Airflow, Anaconda and SageMaker.
You'll also need:
• Experience of using programming languages alongside knowledge of data and software engineering fundamentals
• Experience with performance optimization and tuning
• Good knowledge of modern code development practices
• Great communication skills with the ability to proactively engage with a range of stakeholders
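A minimal sketch in the PySpark-on-AWS style this role calls for: read raw data from S3, apply a transformation, and write a partitioned, analytics-ready table back to S3. The bucket names, paths, and columns are placeholders, and the job assumes an EMR/Glue-style environment with S3 access configured.

```python
# Illustrative PySpark batch ETL job for an EMR/Glue-style environment.
# Bucket names, paths, and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-transactions-etl").getOrCreate()

# Raw landing zone (assumed path and JSON layout).
raw = spark.read.json("s3://example-raw/transactions/2024-01-01/")

cleaned = (
    raw.dropDuplicates(["transaction_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("amount").isNotNull())
)

# Write a curated, date-partitioned Parquet table for analysts.
(
    cleaned.repartition("event_date")
           .write.mode("overwrite")
           .partitionBy("event_date")
           .parquet("s3://example-curated/transactions/")
)

spark.stop()
```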
Posted 1 week ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Join us as a Data Engineer (PySpark, AWS). We're looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you'll develop innovative, data-driven solutions through data pipelines, modelling and ETL design while aspiring to be commercially successful through insights. If you're ready for a new challenge, and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.
What you'll do: Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You'll also provide transformation solutions and carry out complex data extractions. We'll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You'll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.
You'll also be responsible for:
• Driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions
• Participating in the data engineering community to deliver opportunities to support our strategic direction
• Carrying out complex data engineering tasks to build a scalable data architecture and the transformation of data to make it usable to analysts and data scientists
• Building advanced automation of data engineering pipelines through the removal of manual stages
• Leading on the planning and design of complex products and providing guidance to colleagues and the wider team when required
The skills you'll need: To be successful in this role, you'll have an understanding of data usage and dependencies with wider teams and the end customer. You'll also have experience of extracting value and features from large-scale data. You'll need at least eight years of experience working with Python, PySpark and SQL, experience in AWS architecture using EMR, EC2, S3, Lambda and Glue, and experience in Apache Airflow, Anaconda and SageMaker.
You'll also need:
• Experience of using programming languages alongside knowledge of data and software engineering fundamentals
• Experience with performance optimization and tuning
• Good knowledge of modern code development practices
• Great communication skills with the ability to proactively engage with a range of stakeholders
Posted 1 week ago
8.0 years
3 - 4 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
• Develop comprehensive digital analytics solutions utilizing Adobe Analytics for web tracking, measurement, and insight generation
• Design, manage, and optimize interactive dashboards and reports using Power BI to support business decision-making
• Lead the design, development, and maintenance of robust ETL/ELT pipelines integrating diverse data sources
• Architect scalable data solutions leveraging Python for automation, scripting, and engineering tasks
• Oversee workflow orchestration using Apache Airflow to ensure timely and reliable data processing
• Provide leadership and develop robust forecasting models to support sales and marketing strategies
• Develop advanced SQL queries for data extraction, manipulation, analysis, and database management
• Implement best practices in data modeling and transformation using Snowflake and DBT; exposure to Cosmos DB is a plus
• Ensure code quality through version control best practices using GitHub
• Collaborate with cross-functional teams to understand business requirements and translate them into actionable analytics solutions
• Stay updated with the latest trends in digital analytics; familiarity or hands-on experience with Adobe Experience Platform (AEP) / Customer Journey Analytics (CJA) is highly desirable
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
• Master's or Bachelor's degree in Computer Science, Information Systems, Engineering, Mathematics, Statistics, Business Analytics, or a related field
• 8+ years of progressive experience in digital analytics, data analytics or business intelligence roles
• Experience with data modeling and transformation using tools such as DBT and Snowflake; familiarity with Cosmos DB is a plus
• Experience developing forecasting models and conducting predictive analytics to drive business strategy
• Advanced proficiency in web and digital analytics platforms (Adobe Analytics)
• Proficiency in ETL/ELT pipeline development and workflow orchestration (Apache Airflow)
• Skilled in creating interactive dashboards and reports using Power BI or similar BI tools
• Deep understanding of digital marketing metrics, KPIs, attribution models, and customer journey analysis
• Industry certifications relevant to digital analytics or cloud data platforms
• Ability to deliver clear digital reporting and actionable insights to stakeholders at all organizational levels
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. #NJP
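A minimal sketch of the forecasting responsibility mentioned in this listing, using pandas and statsmodels on a hypothetical weekly web-sessions series; the CSV layout, column names, and model choice are assumptions for illustration, not details from the posting.

```python
# Illustrative baseline forecast for a weekly digital-traffic KPI.
# The CSV path, column names, and model configuration are placeholders.
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Weekly sessions exported from the analytics platform (assumed layout: week, sessions).
df = (
    pd.read_csv("weekly_sessions.csv", parse_dates=["week"])
      .set_index("week")
      .asfreq("W")
)

# Holt-Winters with additive trend and yearly seasonality on weekly data.
model = ExponentialSmoothing(
    df["sessions"],
    trend="add",
    seasonal="add",
    seasonal_periods=52,
).fit()

forecast = model.forecast(steps=12)   # next 12 weeks
print(forecast.round(0))
```

A production model would also hold out recent weeks to measure forecast error before publishing numbers to stakeholders.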
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Andhra Pradesh
On-site
Title: Developer (AWS Engineer)
Requirements:
• Candidate must have 5-6 years of IT working experience; at least 3 years of experience in an AWS Cloud environment is preferred
• Strong hands-on experience; proficient in Node.js and Python
• Seasoned developer capable of independently driving development tasks
• Ability to understand the existing system architecture and work towards the target architecture
• Experience with data profiling activities: discovering data quality challenges and documenting them
• Good to have: experience with development and implementation of a large-scale Data Lake and data analytics platform on the AWS Cloud platform
• Develop and unit test data pipeline architecture for data ingestion processes using AWS native services
• Experience with development on AWS Cloud using AWS services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc.
• Good to have: experience with building a data analytics platform using Databricks (data pipelines) and Starburst (semantic layer) on an AWS cloud environment
• Experience with orchestration of workflows in an enterprise environment
• Experience working with source code management tools such as AWS CodeCommit or GitHub
• Experience working with Jenkins or any CI/CD pipelines using AWS services
• Working experience with Agile methodology
• Experience working with an onshore/offshore model and collaborating on deliverables
• Good communication skills to interact with the onshore team
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Technical Architect
Location: Pune
Experience: 6+ years
ABOUT HASHEDIN
We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture.
WHY SHOULD YOU JOIN US?
With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic, fun culture of inclusion, collaboration, and high performance, HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level.
JOB TITLE: Technical Architect
• B.E/B.Tech, MCA, or M.E/M.Tech graduate with 6-10 years of experience (this includes 4 years of experience as an application architect or data architect)
• Java/Python/UI/DE
• GCP/AWS/Azure
• Generative AI-enabled application design pattern knowledge is a value addition
• Excellent technical background with a breadth of knowledge across analytics, cloud architecture, distributed applications, integration, API design, etc.
• Experience in technology stack selection and the definition of solution, technology, and integration architectures for small to mid-sized applications and cloud-hosted platforms
• Strong understanding of various design and architecture patterns
• Strong experience in developing scalable architecture
• Experience implementing and governing software engineering processes, practices, tools, and standards for development teams
• Proficient in effort estimation techniques; will actively support project managers and scrum masters in planning the implementation and will work with test leads on the definition of an appropriate test strategy for the realization of a quality solution
• Extensive experience as a technology/engineering subject matter expert, i.e., high-level solution definition, sizing, and RFI/RFP responses
• Aware of the latest technology trends, engineering processes, practices, and metrics
• Architecture experience with PaaS and SaaS platforms hosted on Azure, AWS, or GCP
• Infrastructure sizing and design experience for on-premise and cloud-hosted platforms
• Ability to understand the business domain and requirements and map them to technical solutions
• Outstanding interpersonal skills; ability to connect and present to CXOs from client organizations
• Strong leadership, business communication, consulting, and presentation skills
• Positive, service-oriented personality
OVERVIEW OF THE ROLE
This role serves as a paradigm for the application of team software development processes and deployment procedures. Additionally, the incumbent actively contributes to the establishment of best practices and methodologies within the team.
• Craft and deploy resilient APIs, bridging cloud infrastructure and software development with seamless API design, development, and deployment
• Work at the intersection of infrastructure and software engineering by designing and deploying data and pipeline management frameworks built on top of open-source components, including Hadoop, Hive, Spark, HBase, Kafka streaming, Tableau, Airflow, and other cloud-based data engineering services like S3, Redshift, Athena, Kinesis, etc.
• Collaborate with various teams to build and maintain the most innovative, reliable, secure, and cost-effective distributed solutions
• Design and develop big data, real-time analytics, and streaming solutions using industry-standard technologies
• Deliver the most complex and valuable components of an application on time as per the specifications
• Play the role of a Team Lead; manage or influence a large portion of an account or a small project in its entirety, demonstrating an understanding of and consistently incorporating practical value with theoretical knowledge to make balanced technical decisions
Posted 1 week ago
4.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – Data Engineer (Python)
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.
The opportunity: We are currently seeking a seasoned Data Engineer with good experience in Python to join our team of professionals.
Key Responsibilities:
• Develop Data Lake tables leveraging AWS Glue and Spark for efficient data management.
• Implement data pipelines using Airflow, Kubernetes, and various AWS services.
Must-Have Skills:
• Experience in deploying and managing data warehouses
• Advanced proficiency of at least 4 years in Python for data analysis and organization
• Solid understanding of AWS cloud services
• Proficient in using Apache Spark for large-scale data processing
Skills and Qualifications Needed:
• Practical experience with Apache Airflow for workflow orchestration
• Demonstrated ability in designing, building, and optimizing ETL processes, data pipelines, and data architectures
• Flexible, self-motivated approach with a strong commitment to problem resolution
• Excellent written and oral communication skills, with the ability to deliver complex information in a clear and effective manner to a range of different audiences
• Willingness to work globally and across different cultures, and to participate in all stages of the data solution delivery lifecycle, including pre-studies, design, development, testing, deployment, and support
• Nice to have: exposure to Apache Druid
• Familiarity with relational database systems
Desired Work Experience: A degree in computer science or a similar field.
What Working At EY Offers: At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that's right for you
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
7.0 - 12.0 years
14 - 24 Lacs
Pune
Work from Office
vConstruct, a Pune-based construction technology company, is seeking a Senior Data Engineer for its Data Science and Analytics team, a close-knit group of analysts and engineers supporting all data aspects of the business. You will be responsible for designing, developing, and maintaining our data infrastructure, ensuring data integrity, and supporting various data-driven projects. You will work closely with cross-functional teams to integrate, process, and manage data from various sources, enabling business insights and enhancing operational efficiency.
Responsibilities
• Lead the end-to-end design and development of scalable, high-performance data pipelines and ETL/ELT frameworks aligned with modern data engineering best practices.
• Architect complex data integration workflows that bring together structured, semi-structured, and unstructured data from both cloud and on-premise sources.
• Build robust real-time, batch, and on-demand pipelines with built-in observability: monitoring, alerting, and automated error handling.
• Partner with analysts, data scientists, and business leaders to define and deliver reliable data models, quality frameworks, and SLAs that power key business insights.
• Ensure optimal pipeline performance and throughput, with clearly defined SLAs and proactive alerting for data delivery or quality issues.
• Collaborate with platform, DevOps, and architecture teams to build secure, reusable, and CI/CD-enabled data workflows that align with enterprise architecture standards.
• Establish and enforce best practices in source control, code reviews, testing automation, and continuous delivery for all data engineering components.
• Lead root cause analysis (RCA) and preventive maintenance for critical data failures, ensuring minimal business impact and continuous service improvement.
• Guide the team in establishing standards for data modeling, transformation logic, and governance, ensuring long-term maintainability and scalability.
• Design and execute comprehensive testing strategies (unit, integration, and system testing), ensuring high data reliability and pipeline resilience.
• Monitor and fine-tune data pipeline and query performance, optimizing for reliability, scalability, and cost-efficiency.
• Create and maintain detailed technical documentation, including data architecture diagrams, process flows, and integration specifications for internal and external stakeholders.
• Facilitate and lead discussions with business and operational teams to understand data requirements, prioritize initiatives, and drive data strategy forward.
Qualifications
• 7 to 10 years of hands-on experience in data engineering roles with a proven record of building scalable and secure data platforms.
• Over 5 years of experience in scripting languages such as Python for data processing, automation, and ETL development.
• 4+ years of experience with Snowflake, including performance tuning, security model design, and advanced SQL development.
• 5+ years of experience with data integration tools such as Azure Data Factory, Fivetran, or Matillion.
• 5+ years of experience in writing complex, highly optimized SQL queries on large datasets.
• Proven experience integrating and managing APIs, JSON, XML, and webhooks for data acquisition.
• Hands-on experience with cloud platforms (Azure/AWS) and orchestration tools like Apache Airflow or equivalent.
• Experience with CI/CD pipelines, automated testing, and code versioning tools (e.g., Git).
• Familiarity with dbt or similar transformation tools and best practices for modular transformation development.
• Exposure to data visualization tools like Power BI for supporting downstream analytics is a plus.
• Strong interpersonal and communication skills with the ability to lead discussions with technical and business stakeholders.
Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Equivalent academic and work experience can be considered.
About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and Construction Technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. For more information, please visit www.vconstruct.com
About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, and higher education and healthcare markets. With the purpose of building great things, great teams, great buildings, great relationships, DPR is a truly great company. For more information, please visit www.dpr.com
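A minimal sketch of the API-based data acquisition named in the qualifications above: paging through a JSON REST endpoint with requests and landing the records as Parquet for downstream loading. The endpoint, token, pagination scheme, and field names are placeholders, not details from the posting.

```python
# Illustrative REST API ingestion: page through a JSON endpoint and land Parquet.
# URL, auth token, pagination parameters, and output path are placeholders.
import requests
import pandas as pd

BASE_URL = "https://api.example.com/v1/projects"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}       # use a secrets manager in practice

records, page = [], 1
while True:
    resp = requests.get(
        BASE_URL,
        headers=HEADERS,
        params={"page": page, "per_page": 200},
        timeout=30,
    )
    resp.raise_for_status()
    batch = resp.json().get("data", [])
    if not batch:          # empty page signals the end of the result set
        break
    records.extend(batch)
    page += 1

# Flatten nested JSON and stage as Parquet for load into the warehouse.
df = pd.json_normalize(records)
df.to_parquet("landing/projects.parquet", index=False)
print(f"Landed {len(df)} records")
```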
Posted 1 week ago
5.0 - 10.0 years
25 - 35 Lacs
Gurugram
Hybrid
Job Title: Data Engineer (Apache Spark, Scala, GCP & Azure)
Location: Gurugram (Hybrid, 3 days/week in office)
Experience: 5–10 years
Type: Full-time
Apply: Share your resume with the details listed below to vijay.s@xebia.com
Availability: Immediate joiners or max 2 weeks' notice period only
About the Role: Xebia is looking for a skilled Data Engineer to join our fast-paced team in Gurugram. You will work on building and optimizing scalable data pipelines, processing large datasets using Apache Spark and Scala, and deploying on cloud platforms like GCP and Azure. If you're passionate about clean architecture, high-quality data flow, and performance tuning, this is the opportunity for you.
Key Responsibilities
• Design and develop robust ETL pipelines using Apache Spark
• Write clean and efficient data processing code in Scala
• Handle large-scale data movement, transformation, and storage
• Build solutions on Google Cloud Platform (GCP) and Microsoft Azure
• Collaborate with teams to define data strategies and ensure data quality
• Optimize jobs for performance and cost on distributed systems
• Document technical designs and ETL flows clearly for the team
Must-Have Skills
• Apache Spark
• Scala
• ETL design & development
• Cloud platforms: GCP & Azure
• Strong understanding of data engineering best practices
• Solid communication and collaboration skills
Good-to-Have Skills
• Apache tools (Kafka, Beam, Airflow, etc.)
• Knowledge of data lake and data warehouse concepts
• CI/CD for data pipelines
• Exposure to modern data monitoring and observability tools
Why Xebia? At Xebia, you'll be part of a forward-thinking, tech-savvy team working on high-impact, global data projects. We prioritize clean code, scalable solutions, and continuous learning. Join us to build real-time, cloud-native data platforms that power business intelligence across industries.
To Apply: Please share your updated resume and include the following details in your email to vijay.s@xebia.com: Full Name; Total Experience; Current CTC; Expected CTC; Current Location; Preferred Xebia Location (Gurugram); Notice Period / Last Working Day (if serving); Primary Skills; LinkedIn Profile URL.
Note: Only candidates who can join immediately or within 2 weeks will be considered. Build intelligent, scalable data solutions with Xebia – let's shape the future of data together.
Posted 1 week ago
6.0 years
0 Lacs
Sanganer, Rajasthan, India
On-site
Unlock yourself. Take your career to the next level. At Atrium, we live and deliver at the intersection of industry strategy, intelligent platforms, and data science — empowering our customers to maximize the power of their data to solve their most complex challenges. We have a unique understanding of the role data plays in the world today and serve as market leaders in intelligent solutions. Our data-driven, industry-specific approach to business transformation for our customers places us uniquely in the market.

Who are you?
You are smart, collaborative, and take ownership to get things done. You love to learn and are intellectually curious about business and technology tools, platforms, and languages. You are energized by solving complex problems and bored when you don't have something to do. You love working in teams and are passionate about pulling your weight to make sure the team succeeds.

What will you be doing at Atrium?
In this role, you will join the best and brightest in the industry to skillfully push the boundaries of what's possible. You will work with customers to make smarter decisions through innovative problem-solving using data engineering, analytics, and systems of intelligence. You will partner to advise, implement, and optimize solutions through industry expertise, leading cloud platforms, and data engineering. As a Snowflake Data Engineering Lead, you will be responsible for expanding and optimizing the data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. You will support the software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.

In This Role, You Will
Lead the design and architecture of end-to-end data warehousing and data lake solutions, focusing on the Snowflake platform, incorporating best practices for scalability, performance, security, and cost optimization
Assemble large, complex data sets that meet functional and non-functional business requirements
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Lead and mentor both onshore and offshore development teams, creating a collaborative environment
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, DBT, Python, AWS, and Big Data tools
Develop ELT processes to ensure timely delivery of required data for customers
Implement data quality measures to ensure accuracy, consistency, and integrity of data
Design, implement, and maintain data models that can support the organization's data storage and analysis needs
Deliver technical and functional specifications to support data governance and knowledge sharing

In This Role, You Will Have
Bachelor's degree in Computer Science, Software Engineering, or an equivalent combination of relevant work experience and education
6+ years of experience delivering consulting services to medium and large enterprises; implementations must have included a combination of the following experiences: Data Warehousing or Big Data consulting for mid-to-large-sized organizations
3+ years of experience specifically with Snowflake, demonstrating deep expertise in its core features and advanced capabilities
Strong analytical skills with a thorough understanding of how to interpret customer business needs and translate those into a data architecture
SnowPro Core certification is highly desired
Hands-on experience with Python (Pandas, DataFrames, functions)
Strong proficiency in SQL (stored procedures, functions), including debugging, performance optimization, and database design
Strong experience with Apache Airflow and API integrations
Solid experience in any one of the ETL/ELT tools (DBT, Coalesce, Wherescape, Mulesoft, Matillion, Talend, Informatica, SAP BODS, DataStage, Dell Boomi, etc.)
Nice to have: experience with Docker, DBT, data replication tools (SLT, Fivetran, Airbyte, HVR, Qlik, etc.), shell scripting, Linux commands, AWS S3, or Big Data technologies
Strong project management, problem-solving, and troubleshooting skills with the ability to exercise mature judgment
Enthusiastic, professional, and confident team player with a strong focus on customer success who can present effectively even under adverse conditions
Strong presentation and communication skills

Next Steps
Our recruitment process is highly personalized. Some candidates complete the hiring process in one week, others may take longer, as it's important we find the right position for you. It's all about timing and can be a journey as we continue to learn about one another. We want to get to know you and encourage you to be selective - after all, deciding to join a company is a big decision!

At Atrium, we believe a diverse workforce allows us to match our growth ambitions and drive inclusion across the business. We are an equal opportunity employer, and all qualified applicants will receive consideration for employment.
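The stack above combines Snowflake, DBT, Python, and Airflow for ELT. As an illustration only (not part of the posting), here is a minimal Airflow DAG sketch of that orchestration pattern: load raw data into Snowflake, then run dbt models and tests. Airflow 2.4+ is assumed, SnowSQL is assumed to be configured on the worker, and the stage, schedule, and dbt project path are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch: COPY a file into Snowflake, then run dbt transformations
# and tests. Stage names and the dbt project path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",   # run daily at 06:00
    catchup=False,
) as dag:

    # COPY INTO from an external stage; SnowSQL CLI assumed to be available on the worker.
    load_raw = BashOperator(
        task_id="load_raw_orders",
        bash_command=(
            "snowsql -q \"COPY INTO raw.orders "
            "FROM @raw.ext_stage/orders/ FILE_FORMAT=(TYPE=PARQUET)\""
        ),
    )

    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test",
    )

    load_raw >> dbt_run >> dbt_test
```

A production version would typically replace the BashOperator calls with the Snowflake and dbt provider operators, but the load-transform-test dependency chain stays the same.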
Posted 1 week ago
2.0 - 6.0 years
5 - 10 Lacs
Bengaluru
Work from Office
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
In the role of Engineer I, you will be an individual contributor on the GCP applications that are critical in the Amex environment, engineering and developing strategic frameworks, processes, tools, and actionable insights. As a Data Engineer, you will be responsible for designing, developing, and maintaining robust and scalable frameworks, services, applications, and pipelines for processing huge volumes of data. You will work closely with cross-functional teams to deliver high-quality software solutions that meet our organizational needs.

Responsibilities:
Design and build GCP architecture and solutions using SQL, PySpark, Python, and cloud technologies
Design and develop solutions using Big Data tools and technologies such as MapReduce, Hive, and Spark
Ensure the performance, quality, and responsiveness of solutions
Participate in code reviews to maintain code quality
Conduct IT requirements gathering
Define problems and provide solution alternatives
Create detailed computer system design documentation
Implement deployment plans
Conduct knowledge transfer with the objective of providing high-quality IT consulting solutions
Support the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment
Under supervision, participate in unit-level and organizational initiatives with the objective of providing high-quality, value-adding consulting solutions
Understand issues and diagnose their root cause
Perform secondary research as instructed by your supervisor to assist in strategy and business planning

Minimum Qualifications:
8+ years of experience in cloud applications, with experience leading a team
Industry knowledge of GCP cloud applications and deployment
Bachelor's degree in Computer Science Engineering or a related field
Ability to write shell scripts
Experience using Git for source version control
Experience setting up and maintaining CI/CD pipelines
Ability to troubleshoot, debug, and upgrade existing applications and ETL job chains
Ability to effectively interpret technical and business objectives and challenges and articulate solutions
Experience managing teams and balancing multiple priorities
Willingness to learn new technologies and exploit them to their optimal potential
Strong experience with data engineering and Big Data applications
Strong background in Python, PySpark, Java, Airflow, Spark, PL/SQL, and Airflow DAGs
Cloud experience with GCP is a must
Excellent communication and analytical skills
Excellent team player with the ability to work with a global team

Preferred Qualifications:
Proven experience as a Data Engineer or in a similar role
Strong proficiency in object-oriented programming using Python
Experience with ETL job design principles
Solid understanding of HQL, SQL, and data modelling
Knowledge of Unix/Linux and shell scripting principles

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
Competitive base salaries
Bonus incentives
Support for financial well-being and retirement
Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
Generous paid parental leave policies (depending on your location)
Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
Free and confidential counseling support through our Healthy Minds program
Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
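The responsibilities above center on Hive- and Spark-based processing steps chained into ETL jobs. As an illustration only (not part of the posting), here is a minimal PySpark sketch of one such Hive-backed transformation step; the database, table, and column names are hypothetical placeholders.

```python
# Minimal PySpark sketch of a Hive-backed transformation step in an ETL job chain.
# Database, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("txn_enrichment")
    .enableHiveSupport()   # lets Spark SQL read and write Hive metastore tables
    .getOrCreate()
)

# Join today's raw transactions with the customer dimension.
enriched = spark.sql("""
    SELECT t.txn_id,
           t.txn_amount,
           t.txn_ts,
           c.customer_segment
    FROM   raw_db.transactions t
    JOIN   dim_db.customers    c
      ON   t.customer_id = c.customer_id
    WHERE  t.load_date = current_date()
""")

# Persist to a curated table that downstream Airflow DAG tasks depend on.
(enriched.write
    .mode("overwrite")
    .saveAsTable("curated_db.transactions_enriched"))

spark.stop()
```

A job like this would typically be one task in an Airflow DAG, with CI/CD handling the packaging and deployment the qualifications describe.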
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
Job Title: Data Engineer (Remote)
Working Hours: 4-hour overlap with EST (9 AM–1 PM)
Type: Full-Time | Department: Engineering

We're hiring skilled Data Engineers to join our remote tech team. You'll develop scalable, cloud-based data products and lead small teams to deliver high-impact solutions. Ideal candidates bring deep technical expertise and a passion for innovation.

Key Responsibilities:
Build and optimize scalable data systems and pipelines
Design APIs for data integration
Lead a small development team, conduct code reviews, mentor juniors
Collaborate with cross-functional teams
Contribute to architecture and system design

Must-Have Skills:
8+ years in Linux, Bash, Python, SQL
4+ years in Spark, Hadoop ecosystem
4+ years with AWS (EMR, Glue, Athena, Redshift)
Team leadership experience

Preferred:
Experience with dbt, Airflow, Hive, data cataloging tools
Knowledge of GCP, scalable pipelines, data partitioning/clustering
BS/MS/PhD in CS or equivalent experience
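The AWS stack named above (EMR, Glue, Athena, Redshift) is typically queried and orchestrated from Python. As an illustration only (not part of the posting), here is a minimal boto3 sketch that runs an Athena query over a data-lake table and polls for the result; the database, table, region, and S3 output location are hypothetical placeholders.

```python
# Minimal boto3 sketch: run an Athena query against a data-lake table and poll for
# completion. Database, table, and S3 output location are hypothetical.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="""
        SELECT event_date, COUNT(*) AS events
        FROM analytics.web_events
        WHERE event_date >= date '2024-01-01'
        GROUP BY event_date
        ORDER BY event_date
    """,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Simple polling loop; a real pipeline would use an Airflow sensor or Step Functions.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the column header
        print([col.get("VarCharValue") for col in row["Data"]])
```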
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Biz2X
Biz2X is the leading digital lending platform, enabling financial providers to power growth with a modern omni-channel experience, best-in-class risk management tools, and a comprehensive yet flexible servicing engine. The company partners with financial institutions to support their digital transformation efforts with Biz2X's digital lending platform. Biz2X solutions not only reduce operational expenses but also accelerate lending growth by significantly improving client experience, reducing total turnaround time, and equipping relationship managers with powerful monitoring insights and alerts.

Read Our Latest Press Release: Press Release - Biz2X

Job Overview:
We are seeking a Senior Engineer – AI/ML to drive the development and deployment of sophisticated AI solutions in our fintech products. You will oversee MLOps pipelines and manage large language models (LLMs) to enhance our financial technology services.

Key Responsibilities:
AI/ML Development: Design and implement advanced ML models for applications including fraud detection, credit scoring, and algorithmic trading.
MLOps: Develop and manage MLOps pipelines using tools such as MLflow, Kubeflow, and Airflow for CI/CD, model monitoring, and automation.
LLMOps: Optimize and operationalize LLMs (e.g., GPT-4, BERT) for fintech applications like automated customer support and sentiment analysis.
Collaboration: Work with product managers, data engineers, and business analysts to align technical solutions with business objectives.

Qualifications:
Experience: 4–6 years in AI, ML, MLOps, and LLMOps with a focus on fintech.
Technical Skills: Expertise in TensorFlow, PyTorch, scikit-learn, and MLOps tools (MLflow, Kubeflow). Proficiency in large language models (LLMs) and cloud platforms (AWS, GCP, Azure). Strong programming skills in Python, Java, or Scala. Experience in building RAG pipelines, NLP, OCR, and PySpark.
Good to have: Production GenAI experience in the fintech/lending domain.
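The MLOps responsibilities above revolve around tracking, versioning, and promoting models with tools such as MLflow. As an illustration only (not part of the posting), here is a minimal MLflow sketch that trains a toy credit-scoring classifier and logs its parameters, metric, and model artifact; the tracking URI, experiment name, and synthetic features are hypothetical placeholders.

```python
# Minimal MLflow sketch: track a toy credit-scoring model so it can be versioned and
# promoted through an MLOps pipeline. Tracking URI and experiment name are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical tracking server
mlflow.set_experiment("credit-scoring")

# Synthetic, imbalanced stand-in for a default/no-default dataset.
X, y = make_classification(n_samples=5_000, n_features=20, weights=[0.9], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 0.5, "max_iter": 500}
    model = LogisticRegression(**params).fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_params(params)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, artifact_path="model")
```

From here, a CI/CD pipeline can compare the logged metric against the current production run and register the new model version only if it improves.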
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JD - Data Engineer

Pattern values data and the engineering required to take full advantage of it. As a Data Engineer at Pattern, you will be working on business problems that have a huge impact on how the company maintains its competitive edge.

Essential Duties And Responsibilities
Develop, deploy, and support real-time, automated, scalable data streams from a variety of sources into the data lake or data warehouse.
Develop and implement data auditing strategies and processes to ensure data quality; identify and resolve problems associated with large-scale data processing workflows; implement technical solutions to maintain data pipeline processes and troubleshoot failures.
Collaborate with technology teams and partners to specify data requirements and provide access to data.
Tune application and query performance using profiling tools and SQL or other relevant query languages.
Understand business, operations, and analytics requirements for data.
Build data expertise and own data quality for assigned areas of ownership.
Work with data infrastructure to triage issues and drive to resolution.

Required Qualifications
Bachelor's degree in Data Science, Data Analytics, Information Management, Computer Science, Information Technology, a related field, or equivalent professional experience
Overall experience of more than 4+ years
3+ years of experience working with SQL
3+ years of experience implementing modern data architecture-based data warehouses
2+ years of experience working with data warehouses such as Redshift, BigQuery, or Snowflake and an understanding of data architecture design
Excellent software engineering and scripting knowledge
Strong communication skills (both in presentation and comprehension) along with an aptitude for thought leadership in data management and analytics
Expertise with data systems working with massive data sets from various data sources
Ability to lead a team of Data Engineers

Preferred Qualifications
Experience working with time series databases
Advanced knowledge of SQL, including the ability to write stored procedures, triggers, analytic/windowing functions, and tuning
Advanced knowledge of Snowflake, including the ability to write and orchestrate streams and tasks
Background in Big Data, non-relational databases, Machine Learning, and Data Mining
Experience with cloud-based technologies including SNS, SQS, SES, S3, Lambda, and Glue
Experience with modern data platforms like Redshift, Cassandra, DynamoDB, Apache Airflow, Spark, or ElasticSearch
Expertise in Data Quality and Data Governance

Our Core Values
Data Fanatics: Our edge is always found in the data
Partner Obsessed: We are obsessed with partner success
Team of Doers: We have a bias for action
Game Changers: We encourage innovation

About Pattern
Pattern is the premier partner for global e-commerce acceleration and is headquartered in Utah's Silicon Slopes tech hub—with offices in Asia, Australia, Europe, the Middle East, and North America. Valued at $2 billion, Pattern has been named one of the fastest-growing tech companies in North America by Deloitte and one of the best-led companies in America by Inc. More than 100 global brands—like Nestle, Sylvania, Kong, Panasonic, and Sorel—rely on Pattern's global e-commerce acceleration platform to scale their business around the world. We place employee experience at the center of our business model and have been recognized as one of America's Most Loved Workplaces®. https://pattern.com/
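The data-auditing duties above boil down to running repeatable quality checks against warehouse tables. As an illustration only (not part of the posting), here is a minimal Python sketch of such an audit, written against a generic DB-API cursor so it could target Redshift, Snowflake, or BigQuery clients alike; the table, columns, and thresholds are hypothetical placeholders.

```python
# Minimal data-audit sketch: row-count, null-rate, and duplicate checks against a
# warehouse table through any DB-API-style cursor. Table and column names are hypothetical.

AUDIT_CHECKS = {
    "row_count": "SELECT COUNT(*) FROM analytics.orders",
    "null_order_ids": "SELECT COUNT(*) FROM analytics.orders WHERE order_id IS NULL",
    "duplicate_order_ids": """
        SELECT COUNT(*) FROM (
            SELECT order_id FROM analytics.orders
            GROUP BY order_id HAVING COUNT(*) > 1
        ) d
    """,
}

def run_audit(cursor, checks=AUDIT_CHECKS):
    """Run each audit query and return a dict of check name -> scalar result."""
    results = {}
    for name, sql in checks.items():
        cursor.execute(sql)
        results[name] = cursor.fetchone()[0]
    return results

def assert_quality(results, min_rows=1):
    """Raise if any hard data-quality rule is violated; report results otherwise."""
    if results["row_count"] < min_rows:
        raise ValueError(f"Table is empty or under-loaded: {results['row_count']} rows")
    if results["null_order_ids"] > 0:
        raise ValueError(f"{results['null_order_ids']} rows have NULL order_id")
    if results["duplicate_order_ids"] > 0:
        raise ValueError(f"{results['duplicate_order_ids']} duplicate order_id values")
    print("All audit checks passed:", results)
```

A pipeline step might call run_audit(conn.cursor()) right after a load and fail the job via assert_quality before downstream consumers read the table.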
Posted 1 week ago