1.0 - 4.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Build data integrations and data models to support the analytical needs of this project.
Responsibilities:
- Translate business requirements into technical requirements as needed
- Design and develop automated scripts for data pipelines to process and transform data as per the requirements, and monitor them
- Produce artifacts such as data flow diagrams, designs, and data models, along with Git code, as deliverables
- Use tools and programming languages such as SQL, Python, Snowflake, Airflow, dbt, and Salesforce Data Cloud
- Ensure data accuracy, timeliness, and reliability throughout the pipeline
- Complete QA and data profiling to ensure data is ready for UAT as per the requirements
- Collaborate with business stakeholders and the visualization team, and support enhancements
- Provide timely updates on sprint boards and tasks; team lead to provide timely updates on all projects
- Project experience with version control systems and CI/CD tools such as Git, GitFlow, Bitbucket, Jenkins, etc.
- Participate in UAT to resolve findings and plan Go Live/production deployment
Milestones:
- Data integration plan into Data Cloud for structured and unstructured data/RAG needs for the Sales AI use cases
- Design data models and semantic layer on Salesforce AI
- Agentforce prompt integration
- Data quality and sourcing enhancements
- Write Agentforce prompts and refine as needed
- Assist decision scientists on data needs
- Collaborate with the EA team and participate in design reviews
- Performance tuning and optimization of data pipelines
- Hypercare after deployment
- Project review and knowledge transfer
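To illustrate the kind of scheduled pipeline this posting describes, here is a minimal sketch of an Airflow DAG that stages data into Snowflake and then runs dbt transformations. All identifiers (the DAG id, file paths, and commands' arguments) are hypothetical assumptions, not details from the posting.

```python
# Minimal sketch: an Airflow DAG that stages data into Snowflake, then runs dbt.
# All identifiers (dag_id, paths) are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="sales_ai_pipeline",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Stage raw data into Snowflake (assumes SnowSQL and this script exist).
    load_raw = BashOperator(
        task_id="load_raw_data",
        bash_command="snowsql -f /opt/pipelines/load_raw.sql",  # hypothetical path
    )

    # Transform staged data with dbt (assumes a dbt project at this path).
    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --project-dir /opt/dbt/sales_ai",  # hypothetical path
    )

    load_raw >> run_dbt
```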
Posted 3 weeks ago
5.0 - 8.0 years
15 - 30 Lacs
Pune
Hybrid
Skills: Data Engineer, Azure Data Factory (ADF), SQL, Power BI, SSRS, SSIS, SSAS, ETL, Databricks, Data Integration, Data Modeling
Posted 3 weeks ago
3.0 - 5.0 years
3 - 8 Lacs
Indore, Dewas, Pune
Work from Office
Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines for data ingestion and processing
- Work with stakeholders to understand data requirements and translate them into technical solutions
- Ensure data quality, reliability, and governance
- Optimize data storage and retrieval for performance and cost efficiency
- Collaborate with Data Scientists, Analysts, and Developers to support their data needs
- Maintain and enhance data architecture to support business growth
Required Skills:
- Strong experience with SQL and relational databases (MySQL, PostgreSQL, etc.)
- Hands-on experience with big data technologies (Spark, Hadoop, Hive, etc.)
- Proficiency in Python/Scala/Java for data engineering tasks
- Experience with cloud platforms (AWS, Azure, or GCP)
- Familiarity with data warehouse solutions (Redshift, Snowflake, BigQuery, etc.)
- Knowledge of workflow orchestration tools (Airflow, Luigi, etc.)
Good to Have:
- Experience with real-time data streaming (Kafka, Flink, etc.)
- Understanding of CI/CD and DevOps practices for data workflows
- Exposure to data security, compliance, and data privacy practices
Qualifications:
- Bachelor's/Master's degree in Computer Science, IT, or a related field
- Minimum 3 years of experience in data engineering or a related field
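As a concrete picture of the ETL work this posting describes, here is a minimal PySpark sketch: ingest a raw file, apply basic quality rules, and write partitioned output. The paths and column names are hypothetical assumptions for illustration only.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, clean, and write partitioned Parquet.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Extract: read raw data (assumes a header row).
raw = spark.read.csv("s3://example-bucket/raw/orders.csv", header=True, inferSchema=True)

# Transform: deduplicate, filter bad rows, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for efficient downstream queries.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()
```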
Posted 3 weeks ago
8.0 - 11.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Data Engineering Associate Advisor - HIH - Evernorth
Position Summary:
The Data Engineering Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery. The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement solutions. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will be working in a team that demands an innovation, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, demonstrating very strong technical and communication skills.
- Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to tradeoff and negotiation discussions.
- Domain Expertise: A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, demonstrate willingness, cooperation, and concern for business issues, and possess in-depth knowledge of the immediate systems worked on.
- Problem Solving: Proven problem-solving skills, including debugging skills that allow you to determine the source of issues in unfamiliar code or systems, the ability to recognize and solve repetitive problems rather than working around them, recognize mistakes and use them as learning opportunities, and break down large problems into smaller, more manageable ones.
Responsibilities:
- Deliver business needs end to end, from requirements through development into production
- Through a hands-on engineering approach in the Databricks environment, deliver data engineering toolchains, platform capabilities, and reusable patterns
- Follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset
- Ensure adherence to enterprise architecture direction and architectural standards
- Collaborate in a high-performing team environment, with the ability to influence and be influenced by others
Experience Required:
- 8 to 11 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation
- More than 3 years of experience in Databricks within an AWS environment
- Data engineering experience
Experience Desired:
- Expertise in Agile software development principles and patterns
- Expertise in building streaming, batch, and event-driven architectures and data pipelines
Primary Skills:
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry
- Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake
- Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation
- Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS
- Experience in API and microservices stacks such as Spring Boot and Quarkus
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront
- Experience with one or more of the following programming and scripting languages (Python, Scala, JVM-based languages, or JavaScript) and the ability to pick up new languages
- Experience in building CI/CD pipelines using Jenkins and GitHub Actions
- Strong expertise with source code management and its best practices
- Proficient in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD)
- Knowledge of the Behavior-Driven Development (BDD) approach
Additional Skills:
- Cloud-based security principles and protocols like OAuth2, JWT, data encryption, hashing data, secret management, etc.
- Ability to perform detailed analysis of business problems and technical environments
- Strong oral and written communication skills
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives
- Continuous focus on ongoing learning and development
About Evernorth Health Services:
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 3 weeks ago
8.0 - 13.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru
We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, with a strong focus on Databricks, Python, and SQL. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support various business needs.
Key Responsibilities:
- Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Databricks
- Work closely with the investment management team to understand data structures and business requirements, ensuring data accuracy and quality
- Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems
- Optimize database performance by designing scalable and cost-effective solutions
What's on Offer:
- Competitive salary and benefits package
- Opportunities for professional growth and development
- A collaborative and inclusive work environment
- The chance to work on impactful projects with a talented team
Candidate Profile:
- 8+ years of experience in data engineering or a similar role
- Proficiency in Apache Spark and Databricks Data Cloud, including schema design, data partitioning, and query optimization
- Exposure to Azure
- Exposure to streaming technologies (e.g., Auto Loader, DLT streaming)
- Advanced SQL and data modeling skills, and data warehousing concepts tailored to investment management data (e.g., transaction, accounting, portfolio, and reference data)
- Experience with ETL/ELT tools like SnapLogic and programming languages (e.g., Python, Scala, R)
- Familiarity with workload automation and job scheduling tools such as Control-M
- Familiarity with data governance frameworks and security protocols
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
Education: Bachelor's degree in computer science, IT, or a related discipline.
Posted 3 weeks ago
1.0 - 3.0 years
3 - 5 Lacs
Hyderabad
Work from Office
What you will do:
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a crucial team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation
Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience
Preferred Qualifications:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and best practices; knowledge of data protection regulations and compliance requirements
Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Good understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)
Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
Posted 3 weeks ago
9.0 - 13.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Role Description:
We are seeking a seasoned Engineering Manager (Data Engineering) to lead the end-to-end management of enterprise data assets and operational data workflows. This role is critical in ensuring the availability, quality, consistency, and timeliness of data across platforms and functions, supporting analytics, reporting, compliance, and digital transformation initiatives. You will be responsible for day-to-day data operations, manage a team of data professionals, and drive process excellence in data intake, transformation, validation, and delivery. You will work closely with cross-functional teams including data engineering, analytics, IT, governance, and business stakeholders to align operational data capabilities with enterprise needs.
Roles & Responsibilities:
- Lead and manage the enterprise data operations team, responsible for data ingestion, processing, validation, quality control, and publishing to various downstream systems
- Define and implement standard operating procedures for data lifecycle management, ensuring accuracy, completeness, and integrity of critical data assets
- Oversee and continuously improve daily operational workflows, including scheduling, monitoring, and troubleshooting data jobs across cloud and on-premise environments
- Establish and track key data operations metrics (SLAs, throughput, latency, data quality, incident resolution) and drive continuous improvements
- Partner with data engineering and platform teams to optimize pipelines, support new data integrations, and ensure scalability and resilience of operational data flows
- Collaborate with data governance, compliance, and security teams to maintain regulatory compliance, data privacy, and access controls
- Serve as the primary escalation point for data incidents and outages, ensuring rapid response and root cause analysis
- Build strong relationships with business and analytics teams to understand data consumption patterns, prioritize operational needs, and align with business objectives
- Drive adoption of best practices for documentation, metadata, lineage, and change management across data operations processes
- Mentor and develop a high-performing team of data operations analysts and leads
Functional Skills:
Must-Have Skills:
- Experience managing a team of data engineers in biotech/pharma domain companies
- Experience in designing and maintaining data pipelines and analytics solutions that extract, transform, and load data from multiple source systems
- Demonstrated hands-on experience with cloud platforms (AWS) and the ability to architect cost-effective and scalable data solutions
- Experience managing data workflows in cloud environments such as AWS, Azure, or GCP
- Strong problem-solving skills with the ability to analyze complex data flow issues and implement sustainable solutions
- Working knowledge of SQL, Python, or scripting languages for process monitoring and automation
- Experience collaborating with data engineering, analytics, IT operations, and business teams in a matrixed organization
- Familiarity with data governance, metadata management, access control, and regulatory requirements (e.g., GDPR, HIPAA, SOX)
- Excellent leadership, communication, and stakeholder engagement skills
- Well-versed in full-stack development, DataOps automation, logging frameworks, and pipeline orchestration tools
- Strong analytical and problem-solving skills to address complex data challenges
- Effective communication and interpersonal skills to collaborate with cross-functional teams
Good-to-Have Skills:
- Data engineering management experience in biotech/life sciences/pharma
- Experience using graph databases such as Stardog, MarkLogic, Neo4j, or AllegroGraph
Education and Professional Certifications:
- Any degree and 9-13 years of experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile (SAFe) certification preferred
Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
Posted 3 weeks ago
7.0 - 12.0 years
15 - 30 Lacs
Chennai
Hybrid
Join our Mega Tech Recruitment Drive at TaskUs Chennai - where bold ideas, real impact, and ridiculous innovation come together.
Who are we hiring for? We are hiring for Developers, Senior Developers, Leads, Architects, and more.
When is it happening? 24th June 2025, 9 AM to 4 PM IST.
Which skills are we hiring for?
- Dot Net Full Stack: AWS/Azure + Angular/React/Vue.js
- Oracle Fusion: Functional Finance (AP, AR, GL, CM, and Tax)
- Senior Data Engineer: Tableau Dashboard / QlikView / Power BI, Azure Databricks, PySpark, Databricks SQL, JupyterHub/PyCharm
- SQL Server Database Administrator: SQL Server Admin (both cloud and on-prem)
- Workday Integration Developer: Workday integration tools (Studio, EIB), Workday Matrix, XML, XSLT
- Workday Configuration Lead Developer: Workday configuration tools (Studio, EIB), Workday Matrix, XML, XSLT, XPath, Simple, Matrix, Composite, Advanced
About TaskUs: TaskUs is a provider of outsourced digital services and next-generation customer experience to fast-growing technology companies, helping its clients represent, protect and grow their brands. Leveraging a cloud-based infrastructure, TaskUs serves clients in the fastest-growing sectors, including social media, e-commerce, gaming, streaming media, food delivery, ride-sharing, HiTech, FinTech, and HealthTech. The People First culture at TaskUs has enabled the company to expand its workforce to approximately 45,000 employees globally. Presently, we have a presence in twenty-three locations across twelve countries, which include the Philippines, India, and the United States.
What We Offer: At TaskUs, we prioritize our employees' well-being by offering competitive industry salaries and comprehensive benefits packages. Our commitment to a People First culture is reflected in the various departments we have established, including Total Rewards, Wellness, HR, and Diversity. We take pride in our inclusive environment and positive impact on the community. Moreover, we actively encourage internal mobility and professional growth at all stages of an employee's career within TaskUs. Join our team today and experience firsthand our dedication to supporting People First.
Posted 3 weeks ago
5.0 - 10.0 years
35 - 40 Lacs
Pune
Work from Office
Job Title: Data Engineer (ETL, Big Data, Hadoop, Spark, GCP), AVP
Location: Pune, India
Role Description:
The Engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals. Awareness of the bank's important engineering principles is expected. Root cause analysis skills are developed through addressing enhancements and fixes to products; reliability and resiliency are built into solutions through early testing, peer reviews, and automation of the delivery lifecycle. The successful candidate should be able to work independently on medium to large sized projects with strict deadlines, work in a cross-application, mixed technical environment, and demonstrate a solid hands-on development track record while working in an agile methodology. The role demands working alongside a geographically dispersed team. The position is required as part of the buildout of the Compliance Tech internal development team in India. The overall team will primarily deliver improvements in compliance tech capabilities that are major components of the regular regulatory portfolio, addressing various regulatory commitments and mandated monitors.
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for 35 yrs. and above
Your key responsibilities:
- Analyze data sets; design and code stable and scalable data ingestion workflows, integrating them into existing workflows
- Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution
- Work as a senior developer for developing analytics algorithms on top of ingested data
- Work as a senior developer for various data sourcing in Hadoop and GCP
- Ensure new code is tested both at unit level and system level; design, develop, and peer review new code and functionality
- Operate as a team member of an agile scrum team
- Apply root cause analysis skills to identify bugs and issues for failures
- Support production support and release management teams in their tasks
Your skills and experience:
- 6+ years of coding experience in reputed organizations
- Hands-on experience with Bitbucket and CI/CD pipelines
- Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive
- Basic understanding of on-prem and GCP data security
- Hands-on development experience on large ETL/big data systems; GCP is a big plus
- Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
- Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
- Basic understanding of data quality dimensions like consistency, completeness, accuracy, lineage, etc.
- Hands-on business and systems knowledge gained in a regulatory delivery environment
- Banking experience with regulatory and cross-product knowledge
- Passionate about test-driven development
- Prior experience with release management tasks and responsibilities
- Data visualization experience in Tableau is good to have
How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
Posted 3 weeks ago
6.0 - 11.0 years
19 - 25 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Working at Target means the opportunity to help all families discover the joy of everyday life. Caring for our communities is woven into who we are, and we invest in the places we collectively live, work and play. We prioritize relationships, fuel and develop talent by creating growth opportunities, and succeed as one Target team. At our core, our purpose is ingrained in who we are, what we value, and how we work. It's how we care, grow, and win together.
Team overview:
The Data Platform at Target is used by 5,000+ Target team members and Target's vast network of vendors to easily turn our data into a strategic advantage. Through the full end-to-end supply chain of data, from source to dashboard, we ensure each technical and non-technical person has the right tools to access, use, and communicate with data. Product teams at Target Corporation are accountable for the delivery of business outcomes enabled through technology and analytic products that are easy to use, easily maintained and highly reliable. Product teams have one shared backlog that is inclusive of all product, technology and design work.
Role overview:
As a Sr Product Manager focused on the Data Platform portfolio, you will be responsible for understanding the needs of our data engineering, platform engineering and ML engineering teams. You will partner with your platform engineering teams and other PMs to build the infrastructure and tools for a scalable, performant and reliable data platform that can power all the analytical, data science, AI and ML use cases for Target. As a Product Manager for the data platform, you will lead the efforts to identify the capabilities and features needed in a modern data platform, understand the consumption patterns of data across all personas, and determine how these capabilities would be powered to solve users' problems. You will set the tone, vision, strategy, OKRs and prioritization for this capability. You will be the voice of the customer with your product team and stakeholders to ensure that their needs are met, and you will be responsible for maintaining and refining the product backlog (creating user stories and acceptance criteria) while prioritizing the backlog to focus on the highest impact work for your team and stakeholders. You will encourage the open exchange of information and viewpoints, as well as inspire others to achieve challenging goals and high standards of performance while committing to the organization's direction. You will foster a sense of urgency to achieve goals and leverage resources to overcome unexpected obstacles. Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.
About you:
- Four-year degree in Computer Science, Engineering, or equivalent experience
- 6+ years of product management experience in data platforms and developer-focused products
- Strong preference for someone who has worked as a Data Engineer or Data Platform Engineer for more than 3 years
- Strong understanding of big data and cloud technologies, including compute, storage and query layers
- Strong ability to influence others without direct authority
- Strong ability to identify and build great relationships with key users, leaders, and engineering teams
- Strong ability to work in an agile, collaborative and matrixed environment
- Proactive communication, both verbal and written, a must
- Proven track record of product leadership
- Understanding of product lifecycle and product startups a plus
Useful Links:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Posted 3 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Team Overview:
Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects and product managers striving to make Target the most convenient, safe and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing, and are eager to contribute to data engineering.
Position Overview:
- Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams
- Analyze technical issues and questions, identifying data needs and delivery mechanisms
- Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies
- Manage the overall development cycle, driving best practices and ensuring development of high quality code for common assets and framework components
- Develop test-driven solutions, provide technical guidance, and contribute heavily to a team of high-caliber data engineers by developing test-driven solutions and BI applications that can be deployed quickly and in an automated fashion
- Manage and execute against agile plans and set deadlines based on client, business, and technical requirements
- Drive resolution of technology roadblocks including code, infrastructure, build, deployment, and operations
- Ensure all code adheres to development and security standards
About you:
- 4-year degree or equivalent experience
- 5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.)
- Hands-on experience in object-oriented or functional programming such as Scala / Java / Python
- Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server)
- Knowledge of data integration design using API and streaming technologies (Kafka), as well as ETL and other data integration patterns
- Experience with cloud platforms like Google Cloud, AWS, or Azure; hands-on experience with BigQuery will be an added advantage
- Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks)
- Experience with CI/CD toolchains (Drone, Jenkins, Vela, Kubernetes) a plus
- Familiarity with data warehousing concepts and technologies
- Maintains technical knowledge within areas of expertise
- Constant learner and team player who enjoys solving tech challenges with a global team
- Hands-on experience in building complex data pipelines and flow optimizations
- Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront
- Experience with test-driven development and software test automation
- Follows best coding practices and engineering guidelines as prescribed
- Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
Posted 3 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.
Overview about TII:
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.
Pyramid overview:
A role with Target Data Science & Engineering means the chance to help develop and manage state of the art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on.
Position Overview:
As a Senior Engineer on the Search Team, you serve as a specialist in the engineering team that supports the product. You help develop and gain insight into the application architecture. You can distill an abstract architecture into concrete design and influence the implementation. You show expertise in applying the appropriate software engineering patterns to build robust and scalable systems. You are an expert in programming and apply your skills in developing the product. You have the skills to design and implement the architecture on your own, but choose to influence your fellow engineers by proposing software designs and providing feedback on software designs and/or implementations. You leverage data science in solving complex business problems, and you make decisions based on data. You show good problem solving skills and can help the team in triaging operational issues. You leverage your expertise in eliminating repeat occurrences.
About You:
- 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience
- Experience with search engines like Solr and Elasticsearch
- Strong hands-on programming skills in Java, Kotlin, Micronaut, and Python; experience with PySpark, SQL, and Hadoop/Hive is an added advantage
- Experience with streaming systems like Kafka; experience with Kafka Streams is an added advantage
- Experience in MLOps is an added advantage
- Experience in data engineering is an added advantage
- Strong analytical thinking skills with an ability to creatively solve business problems, innovating new approaches where required
- Able to produce reasonable documents/narratives suggesting actionable insights
- Self-driven and results oriented
- Strong team player with ability to collaborate effectively across geographies/time zones
Know More About Us here:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
Posted 3 weeks ago
3.0 - 5.0 years
10 - 14 Lacs
Pune
Work from Office
Job Title: GCP Data Engineer, AS
Location: Pune, India
Corporate Title: Associate
Role Description:
An Engineer is responsible for designing and developing entire engineering solutions to accomplish business goals. Key responsibilities of this role include ensuring that solutions are well architected, with maintainability and ease of testing built in from the outset, and that they can be integrated successfully into the end-to-end business process flow. They will have gained significant experience through multiple implementations and have begun to develop both depth and breadth in several engineering competencies. They have extensive knowledge of design and architectural patterns. They will provide engineering thought leadership within their teams and will play a role in mentoring and coaching less experienced engineers.
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for 35 yrs. and above
Your key responsibilities:
- Design, develop and maintain data pipelines using Python and SQL on GCP
- Apply Agile methodologies and ETL, ELT, data movement and data processing skills
- Work with Cloud Composer to manage and process batch data jobs efficiently
- Develop and optimize complex SQL queries for data analysis, extraction, and transformation
- Develop and deploy Google Cloud services using Terraform
- Implement CI/CD pipelines using GitHub Actions
- Consume and host REST APIs using Python
- Monitor and troubleshoot data pipelines, resolving any issues in a timely manner
- Ensure team collaboration using Jira, Confluence, and other tools
- Quickly learn new and existing technologies
- Apply strong problem-solving skills
- Write advanced SQL and Python scripts
- Certification as a Professional Google Cloud Data Engineer is an added advantage
Your skills and experience:
- 6+ years of IT experience as a hands-on technologist
- Proficient in Python for data engineering
- Proficient in SQL
- Hands-on experience with GCP Cloud Composer, Dataflow, BigQuery, Cloud Functions, and Cloud Run; GKE is good to have
- Hands-on experience in REST API hosting and consumption
- Proficient in Terraform (HashiCorp)
- Experienced with GitHub and GitHub Actions
- Experienced in CI/CD
- Experience in automating ETL testing using Python and SQL
- Good to have: API knowledge, Bitbucket
How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
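Since this role combines REST API consumption in Python with GCP data services, here is a minimal sketch of one such task: pulling records from an API and loading them into BigQuery. The endpoint URL and table name are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: pull records from a REST API and load them into BigQuery.
# The URL and table name are hypothetical placeholders.
import requests
from google.cloud import bigquery

API_URL = "https://api.example.com/v1/trades"  # hypothetical endpoint
TABLE_ID = "my-project.analytics.trades"       # hypothetical table

def load_api_data() -> None:
    # Assumes the endpoint returns a JSON array of flat records.
    rows = requests.get(API_URL, timeout=30).json()
    client = bigquery.Client()
    errors = client.insert_rows_json(TABLE_ID, rows)  # streaming insert
    if errors:
        raise RuntimeError(f"BigQuery insert errors: {errors}")

if __name__ == "__main__":
    load_api_data()
```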
Posted 3 weeks ago
3.0 - 5.0 years
32 - 37 Lacs
Mumbai
Work from Office
Job Title: Lead Business Analyst, AVP
Location: Mumbai, India
Role Description:
As a BA you are expected to design and deliver critical senior management dashboards and analytics using tools such as Tableau, Power BI, etc. These management packs should enable management to make timely decisions for their respective businesses and create a sound foundation for the analytics. You will need to collaborate closely with senior business managers, data engineers and stakeholders from other teams to comprehend requirements and translate them into visually pleasing dashboards and reports. You will play a crucial role in analyzing business data and generating valuable insights for other strategic ad hoc exercises.
What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy
- Best in class leave policy
- Gender neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender neutral)
- Sponsorship for industry relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complementary health screening for 35 yrs. and above
Your key responsibilities:
- Collaborate with business users and managers to gather requirements and comprehend business needs, designing optimal solutions
- Perform ad hoc data analysis as per business needs to generate reports, visualizations, and presentations that help strategic decision making
- Source information from multiple sources and build a robust data pipeline model
- Work on large and complex data sets to produce useful insights
- Perform audit checks ensuring integrity and accuracy across all spectrums before implementing findings
- Ensure timely refreshes to provide the most updated information in dashboards/reports
- Identify opportunities for process improvements and optimization based on data insights
- Communicate project status updates and recommendations
Your skills and experience:
- Bachelor's degree in computer science, IT, business administration or a related field
- Minimum of 5 years of experience in visual reporting development, including hands-on development of analytics dashboards and working with complex data sets
- Minimum of 3 years with Tableau, Power BI or any other BI tool
- Excellent Microsoft Office skills, including advanced Excel skills
- Comprehensive understanding of data visualization best practices
- Experience with data analysis, modeling, and ETL processes is advantageous
- Excellent knowledge of database concepts and extensive hands-on experience working with SQL
- Strong analytical, quantitative, problem solving and organizational skills
- Attention to detail and ability to coordinate multiple tasks, set priorities and meet deadlines
- Excellent communication and writing skills
How we'll support you:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs
Posted 3 weeks ago
4.0 - 6.0 years
12 - 16 Lacs
Chennai
Work from Office
Job Information:
- Job Opening ID: ZR_2441_JOB
- Date Opened: 21/03/2025
- Industry: IT Services
- Work Experience: 4-6 years
- Job Title: Data Engineer with Gen AI
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600001
- Number of Positions: 1
We are seeking a skilled Data Engineer who can function as a Data Architect, designing scalable data pipelines, table structures, and ETL workflows. The ideal candidate will be responsible for recommending cost-effective and high-performance data architecture solutions, collaborating with cross-functional teams to enable efficient analytics and data science initiatives.
Key Responsibilities:
- Design and implement ETL workflows, data pipelines, and table structures to support business analytics and data science
- Optimize data storage, retrieval, and processing for cost-efficiency and high performance
- Collaborate with Analytics and Data Science teams for feature engineering and KPI computations
- Develop and maintain data models for structured and unstructured data
- Ensure data quality, integrity, and security across systems
- Work with cloud platforms (AWS/Azure/GCP) to design and manage scalable data architectures
Technical Skills Required:
- SQL & Python: strong proficiency in writing optimized queries and scripts
- PySpark: hands-on experience with distributed data processing
- Cloud technologies (AWS/Azure/GCP): experience with cloud-based data solutions
- Spark & Airflow: experience with big data frameworks and workflow orchestration
- Gen AI (preferred): exposure to generative AI applications is a plus
Preferred Qualifications:
- Experience in data modeling, ETL optimization, and performance tuning
- Strong problem-solving skills and ability to work in a fast-paced environment
- Prior experience working with large-scale data processing
Posted 3 weeks ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information:
- Job Opening ID: ZR_2162_JOB
- Date Opened: 15/03/2024
- Industry: Technology
- Work Experience: 7-9 years
- Job Title: Sr Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560004
- Number of Positions: 5
Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark
Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
- Experience creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modeling, governance and data architecture
- Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and Scrum development processes
- Experience with EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products in streaming, serverless and microservices architectures and platforms
Posted 3 weeks ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information:
- Job Opening ID: ZR_2199_JOB
- Date Opened: 15/04/2024
- Industry: Technology
- Work Experience: 6-10 years
- Job Title: Sr Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600004
- Number of Positions: 4
Requirements:
- Strong experience in Python
- Good experience in Databricks
- Experience working in AWS/Azure cloud platforms
- Experience working with REST APIs and services, messaging and event technologies
- Experience with ETL or building data pipeline tools
- Experience with streaming platforms such as Kafka
- Demonstrated experience working with large and complex data sets
- Ability to document data pipeline architecture and design
- Experience in Airflow is nice to have
- Ability to build complex Delta Lake solutions
Posted 3 weeks ago
3.0 - 5.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Information:
- Job Opening ID: ZR_1673_JOB
- Date Opened: 20/12/2022
- Industry: Technology
- Work Experience: 3-5 years
- Job Title: Senior DevOps Engineer
- City: Hyderabad
- Province: Telangana
- Country: India
- Postal Code: 500001
- Number of Positions: 4
Roles & Responsibilities:
- 3+ years of working experience in data engineering
- 'Hands-on keyboard' AWS implementation experience across a broad range of AWS services
- Must have in-depth AWS development experience (containerization - Docker, Amazon EKS, Lambda, EC2, S3, Amazon DocumentDB, PostgreSQL)
- Strong knowledge of DevOps and CI/CD pipelines (GitHub, Jenkins, Artifactory)
- Scripting capability and the ability to develop AWS environments as code
- Hands-on AWS experience with at least 1 implementation (preferably in an enterprise scale environment)
- Experience with core AWS platform architecture, including areas such as Organizations, Account Design, VPC, Subnet, and segmentation strategies
- Backup and disaster recovery approach and design
- Environment and application automation
- CloudFormation and third-party automation approach/strategy
- Network connectivity, Direct Connect and VPN
- AWS cost management and optimization
- Skilled experience with Python libraries (NumPy, Pandas DataFrame)
Posted 3 weeks ago
6.0 - 10.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information:
- Job Opening ID: ZR_2470_JOB
- Date Opened: 03/05/2025
- Industry: IT Services
- Work Experience: 6-10 years
- Job Title: Sr. Data Engineer
- City: Bangalore South
- Province: Karnataka
- Country: India
- Postal Code: 560050
- Number of Positions: 1
We're looking for an experienced Senior Data Engineer to lead the design and development of scalable data solutions at our company. The ideal candidate will have extensive hands-on experience in data warehousing, ETL/ELT architecture, and cloud platforms like AWS, Azure, or GCP. You will work closely with both technical and business teams, mentoring engineers while driving data quality, security, and performance optimization.
Responsibilities:
- Lead the design of data warehouses, lakes, and ETL workflows
- Collaborate with teams to gather requirements and build scalable solutions
- Ensure data governance, security, and optimal performance of systems
- Mentor junior engineers and drive end-to-end project delivery
Requirements:
- 6+ years of experience in data engineering, including at least 2 full-cycle data warehouse projects
- Strong skills in SQL, ETL tools (e.g., Pentaho, dbt), and cloud platforms
- Expertise in big data tools (e.g., Apache Spark, Kafka)
- Excellent communication skills and leadership abilities
Preferred:
- Experience with workflow orchestration tools (e.g., Airflow), real-time data, and DataOps practices
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Pune
Work from Office
Job Information:
- Job Opening ID: ZR_2098_JOB
- Date Opened: 13/01/2024
- Industry: Technology
- Job Type: Contract
- Work Experience: 5-8 years
- Job Title: DCT Data Engineer
- City: Pune City
- Province: Maharashtra
- Country: India
- Postal Code: 411001
- Number of Positions: 4
Locations: Pune, Bangalore, Indore
Work mode: Work from Office
Skills:
- Informatica Data Quality (IDQ)
- Azure Databricks
- Azure Data Lake
- Azure Data Factory
- API integration
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information:
- Job Opening ID: ZR_1628_JOB
- Date Opened: 09/12/2022
- Industry: Technology
- Work Experience: 5-8 years
- Job Title: Data Engineer
- City: Bangalore
- Province: Karnataka
- Country: India
- Postal Code: 560001
- Number of Positions: 4
Roles and Responsibilities:
- 4+ years of experience as a data developer using Python
- Knowledge of Spark and PySpark preferable but not mandatory
- Azure cloud experience preferred; alternate cloud experience is fine
- Preferred experience in the Azure platform, including Azure Data Lake, Databricks, and Data Factory
- Working knowledge of different file formats such as JSON, Parquet, CSV, etc.
- Familiarity with data encryption and data masking
- Database experience in SQL Server is preferable; preferred experience in NoSQL databases like MongoDB
- Team player, reliable, self-motivated, and self-disciplined
Posted 3 weeks ago
10.0 - 15.0 years
12 - 17 Lacs
Hyderabad
Work from Office
- At least 10 years of proven experience in data analytics and data engineering
- Experience with SQL Server queries and PL/SQL
- Experience with Azure Data Factory
- Strong expertise in database development and migration
- Experience in BODS reverse engineering
- Experience with SAP IQ (Sybase) database administration
- Experience in Crystal Reports development
- Ensures all work is carried out to the highest quality standards with appropriate detailed documentation
- Good communication skills, proactive and a team player
- English language skills in speaking and writing
Posted 3 weeks ago
4.0 - 6.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
KPI Partners is seeking a highly skilled and experienced GenAI Engineer with a strong background in data engineering and software development to join our team. The ideal candidate will focus on enhancing our information retrieval and generation capabilities, with specific experience in Azure AI Search, data processing for RAG, multimodal data integration, and familiarity with Databricks.
Key Responsibilities:
- Design, develop, and optimize Retrieval-Augmented Generation (RAG) models to improve information retrieval and generation processes within our applications
- Develop and maintain search solutions using Azure AI Search to ensure efficient and accurate information access
- Process and prepare data to support RAG workflows, ensuring data quality and relevance
- Integrate and manage various data types (e.g., text, images) to enhance retrieval and generation capabilities
- Work closely with cross-functional teams to integrate data into our existing retrieval ecosystem, ensuring seamless functionality and performance
- Ensure the scalability, reliability, and performance of data retrieval in production environments
- Stay updated with the latest advancements in AI, ML, and data engineering to drive innovation and maintain a competitive edge
What we're looking for:
- Master's degree in Data Science or a related field is preferred
- Approximately 8 years of experience in Data Science, MLOps, and Data Engineering
- Proven experience in AI and ML solution implementation, particularly in semiconductor manufacturing
- Proficiency in Python
- Proven experience in data engineering and software development, with a focus on building and deploying RAG pipelines or similar information retrieval systems
- Familiarity with processing multimodal data (e.g., text, images) for retrieval and generation tasks
- Strong understanding of database systems (SQL and NoSQL) and data warehousing solutions
- Proficiency in Azure AI, Databricks, and other relevant tools
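To make the retrieval half of the RAG workflow this posting describes concrete, here is a minimal sketch of querying Azure AI Search for context passages. The endpoint, index name, key, and the `content` field are hypothetical assumptions for illustration.

```python
# Minimal sketch: retrieve context passages from Azure AI Search for a RAG flow.
# Endpoint, index name, key, and field names are hypothetical placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://example-search.search.windows.net",  # hypothetical
    index_name="docs-index",                                # hypothetical
    credential=AzureKeyCredential("<api-key>"),             # use a vault in practice
)

def retrieve_context(question: str, top: int = 3) -> list[str]:
    """Return the top passages used to ground a downstream generation step."""
    results = client.search(search_text=question, top=top)
    return [doc["content"] for doc in results]  # assumes a 'content' field

passages = retrieve_context("How are portfolio returns calculated?")
```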
Posted 3 weeks ago
4.0 - 7.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Role Description:
As part of the cybersecurity organization, in this vital role you will be responsible for designing, building, and maintaining data infrastructure to support data-driven decision-making. This role involves working with large datasets, developing reports, executing data governance initiatives, and ensuring data is accessible, reliable, and efficiently managed. The role sits at the intersection of data infrastructure and business insight delivery, requiring the Data Engineer to design and build robust data pipelines while also translating data into meaningful visualizations for stakeholders across the organization. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture, ETL processes, and cybersecurity data frameworks.
Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Develop and maintain interactive dashboards and reports using tools like Tableau, ensuring data accuracy and usability
- Schedule and manage workflows to ensure pipelines run on schedule and are monitored for failures
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with data scientists to develop pipelines that meet dynamic business needs
- Share and discuss findings with team members practicing the SAFe Agile delivery model
What we expect of you:
We are all different, yet we all use our unique contributions to serve patients. The data engineering professional we seek is one with these qualifications.
Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience, OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience, OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience
Preferred Qualifications:
- Hands-on experience with data practices, technologies, and platforms, such as Databricks, Python, GitLab, LucidChart, etc.
- Hands-on experience with data visualization and dashboarding tools (Tableau, Power BI, or similar) is a plus
- Proficiency in data analysis tools (e.g., SQL) and experience with data sourcing tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Understanding of data governance frameworks, tools, and best practices
- Knowledge of and experience with data standards (FAIR) and protection regulations and compliance requirements (e.g., GDPR, CCPA)
Good-to-Have Skills:
- Experience with ETL tools and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, and cloud data platforms
- Experience working in a product team environment
- Experience working in an Agile environment
Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred
Soft Skills:
- Initiative to explore alternate technology and approaches to solving problems
- Skilled in breaking down problems, documenting problem statements, and estimating efforts
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to handle multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
Posted 3 weeks ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting will be an added advantage.
Key Responsibilities:
- Design, develop, and optimize data pipelines and ETL processes for data warehousing projects
- Work extensively with Snowflake, ensuring efficient data modeling and query optimization
- Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration
- Implement data transformations, testing, and documentation using dbt
- Collaborate with cross-functional teams to ensure data accuracy, consistency, and security
- Troubleshoot data-related issues
- (Optional) Utilize Python for scripting, automation, and data processing tasks
Required Skills & Qualifications:
- Experience in data warehousing with a strong understanding of best practices
- Hands-on experience with Snowflake (data modeling, query optimization)
- Proficiency in Azure Data Factory (ADF) for data pipeline development
- Strong working knowledge of dbt (Data Build Tool) for data transformations
- (Optional) Experience in Python scripting for automation and data manipulation
- Good understanding of SQL and query optimization techniques
- Experience in cloud-based data solutions (Azure)
- Strong problem-solving skills and ability to work in a fast-paced environment
- Experience with CI/CD pipelines for data engineering
Why Join Us:
- Opportunity to work on cutting-edge data engineering projects
- Work with a highly skilled and collaborative team
- Exposure to modern cloud-based data solutions
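As one way to picture the optional Python scripting this posting mentions alongside Snowflake, here is a minimal sketch that runs a transformation from Python using the Snowflake connector. The account identifier, credentials, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a Snowflake transformation from Python.
# Account, credentials, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # hypothetical account identifier
    user="ETL_USER",
    password="***",                  # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Materialize a cleaned table from a raw staging table (hypothetical names).
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.ORDERS AS
        SELECT ORDER_ID, CUSTOMER_ID, TRY_TO_DATE(ORDER_TS) AS ORDER_DATE, AMOUNT
        FROM RAW.ORDERS
        WHERE AMOUNT > 0
    """)
finally:
    conn.close()
```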
Posted 3 weeks ago