
10660 ETL Jobs - Page 25

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

10 - 20 Lacs

Chennai, Coimbatore, Bengaluru

Work from Office

Source: Naukri

Requirement Name / Title: Dataiku Data Engineering Developer/Lead. Experience: 6+ years. Location: Chennai/Bangalore. Notice: Immediate to a maximum of 15 days. Mandatory Skills: Dataiku for ETL operations, and preferably other ETL tools such as Informatica. Strong Python coding, SQL and Git. Proven experience as a Data Engineer in data integration and data analysis. Preferred Skills: Banking domain exposure and AI/ML or data science exposure. Key Roles & Responsibilities: 6+ years of experience, including 2+ years delivering projects on the Dataiku platform. Proficiency in configuring and optimizing Dataiku's architecture, including data connections, security settings and workflow management. Hands-on experience with Dataiku recipes, Designer nodes, API nodes and Automation nodes, including deployment. Expertise in Python scripting, automation and development of custom workflows in Dataiku. Collaborate with data analysts, business stakeholders and the client to gather and understand requirements. Contribute to development in the Dataiku environment, applying data integration with the given logic to fulfil bank regulatory and other customer requirements. Gather, analyse and interpret requirement specifications received directly from the client. Ability to work independently and effectively in a fast-paced, dynamic environment. Strong analytical and problem-solving skills. Familiarity with agile development methodologies. Participate in the CR/production deployment process through Azure DevOps.
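For context, a Dataiku Python recipe typically follows a read-transform-write pattern over datasets registered in the Flow. Below is a minimal illustrative sketch; it only runs inside a Dataiku DSS recipe environment, and the dataset names are hypothetical, not taken from the listing:

```python
# Minimal sketch of a Dataiku Python recipe (dataset names are hypothetical).
import dataiku
import pandas as pd

# Read the input dataset registered in the Dataiku Flow
raw = dataiku.Dataset("raw_transactions")
df = raw.get_dataframe()

# Example transformation: standardise column names and drop duplicate rows
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.drop_duplicates()

# Write the result to the output dataset, inferring the schema from the DataFrame
out = dataiku.Dataset("transactions_clean")
out.write_with_schema(df)
```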

Posted 2 days ago

Apply

2.0 - 5.0 years

5 - 13 Lacs

Hyderabad

Hybrid

Source: Naukri

SnapLogic: knowledge of both tools – MuleSoft and SnapLogic. REST APIs, SOAP web services, authentication concepts. ETL architecture and design of interfaces in SnapLogic Designer, Manager and Dashboard. SQL, PL/SQL. JSON path expressions, Python scripting.
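The SnapLogic pipelines themselves are built in its Designer UI, but the REST, JSON path and Python scripting skills named above can be illustrated with a small standalone sketch. The endpoint, token and path expression below are placeholders for illustration only:

```python
# Hedged sketch: calling a REST API with bearer authentication and extracting
# fields with a JSONPath expression. The endpoint and token are placeholders.
import requests
from jsonpath_ng import parse

API_URL = "https://api.example.com/v1/orders"   # hypothetical endpoint
TOKEN = "replace-with-real-token"

resp = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
payload = resp.json()

# JSONPath: pull every order id out of a nested response structure
order_ids = [m.value for m in parse("$.orders[*].id").find(payload)]
print(order_ids)
```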

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Gurgaon/Bangalore, India. AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL’s executive leadership team to maximize benefits and facilitate sustained competitive advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation through optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward greater focus on the use of data and data-driven insights, we are seeking an Assistant Scientist for our Data Engineering team. The role will support the team’s efforts towards creating, enhancing, and stabilizing the Enterprise data lake through the development of data pipelines. This role requires a person who is a team player and can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You’ll Be DOING. What will your essential responsibilities include? Relevant years of extensive work experience in various data engineering & modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills. Relevant years of programming experience using Databricks. Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS). Solid knowledge of network and firewall concepts. Provide technical guidance and mentorship to junior and mid-level developers. Lead design discussions and code reviews to ensure high-quality development. Utilize SQL to query and manipulate data from various sources, ensuring data accuracy and integrity. Develop and maintain data pipelines using dbt and Databricks to facilitate data processing and transformation. Skilled professional with extensive experience in SQL performance tuning and a proven track record in writing complex SQL queries. Demonstrated ability to optimize database performance, enhance data retrieval efficiency, and ensure the integrity and security of data across various applications. Collaborate with project managers to define project scope, timelines, and deliverables. Oversee the development process, ensuring projects are delivered on time and meet quality standards. Assist in deploying and managing data solutions on Azure Cloud, ensuring optimal performance and security. Create and maintain documentation for data processes, data models, and system architecture. Participate in data quality checks and troubleshooting to resolve data-related issues. Maintain integrity and quality across all pipelines and environments. Participate in the architectural design and decision-making processes for new projects and enhancements. Bring ideas to the table that help streamline and rationalize ETL jobs. Lead small teams of strategic partner/vendor team members. Work with business users to bring requests to closure. You will report to the Lead Scientist.

What You Will BRING. We’re looking for someone who has these abilities and skills. Required Skills And Abilities: Proficiency in SQL and experience with dbt is essential. Bachelor’s degree in computer science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience. Proficiency in SQL for database querying and management.
Excellent programming skills in Python, with experience in data manipulation and analysis. Must have hands-on experience in designing and developing ETL pipelines. Relevant years of exposure and good proficiency in data warehousing concepts. Proficient in SQL and database design concepts. Good knowledge of unit testing and documentation (low-level designs).

Desired Skills And Abilities: Understanding of the Azure cloud computing platform, specifically Azure Synapse and Azure Data Lake Storage (ADLS), is a plus. Experience with Databricks, Azure Data Factory (ADF), and PySpark is a must. A passion for data and experience in a data-driven organizational environment. A commitment to excellence and a genuine care for both your work and the overall mission of the organization. Knowledge of GitHub and build management practices is an added advantage.

Who WE are: AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What we OFFER. Inclusion: AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture enables business growth and is critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most inclusive workforce possible, and create a culture where everyone can bring their full selves to work and reach their highest potential. It’s about helping one another — and our business — to move forward and succeed. Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe. Robust support for Flexible Working Arrangements. Enhanced family-friendly leave benefits. Named to the Diversity Best Practices Index. Signatory to the UK Women in Finance Charter. Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards: AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides competitive compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability: At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars. Valuing nature: How we impact nature affects how nature impacts us.
Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.

Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.

Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.

AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability.

Posted 2 days ago

Apply

3.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Title: Business Intelligence (BI) Developer. Experience: 3 to 6 Years. Location: Chennai / Pune / Gurugram (Hybrid/On-site options available). Job Summary: We are looking for a skilled and motivated BI Developer with 3–6 years of experience to join our analytics team. The ideal candidate will have strong expertise in Tableau and AWS cloud services, and be passionate about transforming complex data into actionable insights to support strategic decision-making. Key Responsibilities: Design, develop, and maintain interactive Tableau dashboards and reports based on business requirements. Collaborate with business stakeholders to gather reporting requirements and translate them into technical specifications. Build and optimize ETL pipelines using AWS services (e.g., AWS Glue, Redshift, S3, Athena, Lambda). Perform data modeling and data validation to ensure high quality and accuracy of BI solutions. Work with large datasets, ensure performance tuning of reports and dashboards. Ensure data security and compliance as per company and regulatory standards. Troubleshoot and resolve issues related to BI tools, data, and performance. Participate in code reviews and follow best practices for dashboard design and development. Required Skills & Qualifications: 3 to 6 years of hands-on experience as a BI Developer or in a similar data analytics role. Strong proficiency in Tableau – dashboard development, storytelling, visual best practices. Experience with AWS services such as Redshift, S3, Glue, Athena, Lambda, etc. Solid understanding of SQL and relational database concepts. Experience working with cloud data warehouses (e.g., Amazon Redshift, Snowflake on AWS). Knowledge of ETL processes, data modeling (Star/Snowflake schema), and performance tuning. Strong problem-solving skills and attention to detail. Ability to work independently and collaboratively in a fast-paced environment.
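As a rough illustration of the AWS side of such an ETL pipeline, the sketch below submits an Athena query over data staged in S3 using boto3 and polls for completion. The database, table and S3 locations are hypothetical and not part of the listing:

```python
# Hedged sketch: run an Athena query with boto3 and wait for it to finish.
# Database, table, and S3 locations are placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

start = athena.start_query_execution(
    QueryString="SELECT region, SUM(sales) AS total FROM sales_curated GROUP BY region",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query succeeds or fails
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Returned {len(rows) - 1} data rows")  # first row is the header
```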

Posted 2 days ago

Apply

3.0 - 6.0 years

15 - 30 Lacs

Hyderabad

Remote

Source: Naukri

Role & responsibilities: We are seeking a skilled Data Engineer to join our team and enhance the efficiency and accuracy of health claim fraud detection. This role involves designing, building, and optimizing data pipelines, integrating AI/ML models in Vertex AI, and improving data processing workflows to detect fraudulent claims faster and more accurately. Qualifications: Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field. 3+ years of experience in data engineering, preferably in the healthcare or financial sector. Strong experience with Google Cloud (GCP) services: Vertex AI, BigQuery, Dataflow, Pub/Sub, Dataproc. Expertise in SQL and Python for data processing and transformation. Experience with ML model deployment and monitoring in Vertex AI. Knowledge of ETL pipelines, data governance, and security best practices. Familiarity with healthcare data standards (HL7, FHIR) and compliance frameworks. Experience with Apache Beam, Spark, or Kafka is a plus. Preferred Qualifications: Experience in fraud detection models using AI/ML. Hands-on experience with MLOps in GCP. Strong problem-solving skills and the ability to work in a cross-functional team.
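As an illustration of the BigQuery piece of such a pipeline, the sketch below pulls simple claim-level features into a DataFrame that could then be fed to a model. The project, dataset, table and column names are hypothetical:

```python
# Hedged sketch: pull claim-level features from BigQuery into a DataFrame.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-health-project")

sql = """
SELECT
  claim_id,
  provider_id,
  claim_amount,
  COUNT(*) OVER (PARTITION BY provider_id) AS claims_per_provider
FROM `example-health-project.claims.claims_staging`
WHERE claim_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
"""

features = client.query(sql).to_dataframe()
print(features.head())
```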

Posted 2 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance. We are an experienced team of industry experts dedicated to helping readers make smart decisions and choose the right products with ease. Marketplace boasts decades of experience across dozens of geographies and teams. The team brings rich industry knowledge to Marketplace’s global coverage of consumer credit, debt, health, home improvement, banking, investing, credit cards, small business, education, insurance, loans, real estate and travel. Job Description The Role We’re hiring a Data Engineering Lead to help scale and guide a growing team of data engineers. This role is ideal for someone who enjoys solving technical challenges hands-on while also shaping engineering best practices, coaching others, and helping cross-functional teams deliver data products with clarity and speed. You’ll manage a small team of ICs responsible for building and maintaining pipelines that support reporting, analytics, and machine learning use cases. You’ll be expected to drive engineering excellence — from code quality to deployment hygiene — and play a key role in sprint planning, architectural discussions, and stakeholder collaboration. This is a critical leadership role as our data organization expands to meet growing demand across media performance, optimization, customer insights, and advanced analytics. What You’ll Do Lead and grow a team of data engineers working across ETL/ELT, data warehousing, and ML-enablement Own team delivery across sprints, including planning, prioritization, QA, and stakeholder communication Set and enforce strong engineering practices around code reviews, testing, observability, and documentation Collaborate cross-functionally with Analytics, BI, Revenue Operations, and business stakeholders in Marketing and Sales Guide technical architecture decisions for our pipelines on GCP (BigQuery, GCS, Composer) Model and transform data using dbt and SQL, supporting reporting, attribution, and optimization needs Ensure data security, compliance, and scalability — especially around first-party customer data Mentor junior engineers through code reviews, pairing, and technical roadmap discussions What You Bring 6+ years of experience in data engineering, including 2+ years of people management or formal team leadership Strong technical background with Python, Spark, Kafka, and orchestration tools like Airflow Deep experience working in GCP, especially BigQuery, GCS, and Composer Strong SQL skills and familiarity with DBT for modeling and documentation Clear understanding of data privacy and governance, including how to safely manage and segment first-party data Experience working in agile environments, including sprint planning and ticket scoping Excellent communication skills and proven ability to work cross-functionally across global teams. 
Nice to have: Experience leading data engineering teams in digital media or performance marketing environments. Familiarity with data from Google Ads, Meta, TikTok, Taboola, Outbrain, and Google Analytics (GA4). Exposure to BI tools like Tableau or Looker. Experience collaborating with data scientists on ML workflows and experimentation platforms. Knowledge of data contracts, schema versioning, or platform ownership patterns. Perks: Day off on the 3rd Friday of every month (one long weekend each month). Monthly Wellness Reimbursement Program to promote health and well-being. Monthly Office Commutation Reimbursement Program. Paid paternity and maternity leaves.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Role Profile: Junior Backend Developer. The back-end developer will join a new team, established to develop and drive the technology and automation roadmap for EPIC Group. You will report directly to the Managing Director, who is based in the UK. This is an exciting role that will provide the opportunity to work on projects that will have a direct impact on creating new revenue opportunities for the Group, and driving operational efficiencies that grow profitability. The automation initiatives will also provide EPIC with the foundations for scalable growth in the future. Role description and responsibility overview: End-to-end development of multiple web applications, from design to development, testing and maintenance of tools and applications. Project planning the end-to-end development cycle and reporting regularly to your line manager with progress updates. Reviewing current business processes and technology and identifying opportunities for efficiency gains. Working with business stakeholders to scope projects end-to-end, translating business requirements to technical requirements. Working with third parties to integrate their off-the-shelf solutions into internal systems and workflows. Integration (API) development between systems. Create and maintain software documentation. Maintain, expand, and scale our website. Remain knowledgeable of emerging technologies/industry trends and apply them to operations and activities. User authentication and authorization between multiple systems, servers, and environments. Setting up caching, queuing, job scheduling and notifications. Database development and maintenance where required. Candidate profile. Non-technical: Strong command of the English language and an effective communicator. Strong logic and problem-solving skills. Strong numeracy and analytical skills. Shift timings: 1330 hours to 2200 hours, or 1130 hours to 2000 hours for female employees. Strong work ethic and delivers on tasks with a sense of urgency. Technical: Qualification: Bachelor's degree in computer science or a related field. Minimum of 2 years of experience working with backend technologies. Expert-level programming and debugging skills in PHP, CodeIgniter, Laravel, jQuery, JSON. Understanding of the fully synchronous behaviour of PHP. Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3. Knowledge of object-oriented PHP programming. Understanding of accessibility and security compliance. Integration: Knowledge of developing and implementing modern web services using RESTful APIs. Databases: Experience working with databases like MySQL, PostgreSQL, MongoDB. Experience with data engineering, ETL processes, database development and maintenance. Version control: Proficient in version control systems like GitHub or Bitbucket to ensure efficient collaboration with other team members. Security: Authentication and authorisation between multiple systems, servers and environments. Testing: Proficient in testing, debugging and troubleshooting issues effectively. Location / Hours: Noida. Shift timings: 13:30 - 22:00 / 11:30 - 20:00 on a [weekly rotating basis].

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Our team helps clients navigate various analytics applications to get the most value out of their technology investment and foster confidence in their business intelligence. As part of our team, you'll help our clients implement enterprise content and data management applications that improve operational effectiveness and provide impactful data analytics and insights. To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional; our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. Responsibilities: As an Associate, you'll work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to: Invite and give in-the-moment feedback in a constructive manner. Share and collaborate effectively with others. Identify and make suggestions for improvements when problems and/or opportunities arise. Handle, manipulate and analyse data and information responsibly. Follow risk management and compliance procedures. Keep up-to-date with developments in your area of specialism. Communicate confidently in a clear, concise and articulate manner - verbally and in the materials you produce. Build and maintain an internal and external network. Seek opportunities to learn about how PwC works as a global network of firms. Uphold the firm's code of ethics and business conduct. Job Summary: A career in our Analytics Data Assurance practice sits within the Risk and Regulatory vertical of our Advisory practice. It will provide you with the opportunity to assist clients in developing analytics and technology solutions that help them detect, monitor, and predict risk. Help business leaders solve business problems using the best of data analytics tools and technologies. You will also help the practice grow in the US, Australia and UK markets, build professional relationships and communicate effectively with stakeholders. Job Description: As an Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution.
Specific responsibilities include but are not limited to: Provide support to our clients with technology consulting solutions. Work on data analysis and provide insights using tools like SQL, Tableau, Power BI and Excel. Prepare and cleanse raw data for analysis using tools like Alteryx and Python. Work with global teams: attend calls, ask relevant questions, and provide status reporting to different stakeholders. General: Ability and interest to learn new technologies. Deliver on clients' technological needs with the best quality. Communicate clearly when writing, speaking and/or presenting to project stakeholders. Understand the client need and translate it using technology. Must Have: Strong analytical and problem-solving skills. Knowledge of SQL/Python programming. Project experience with any ETL and/or data visualization tools like Alteryx, Tableau or Power BI. Good communication skills. Good To Have: Accounting experience. Cloud experience. Experience in risk management. Consulting experience. Preferred Qualifications: B.Tech (B.E.) or MCA from a reputed college/university.

Posted 2 days ago

Apply

13.0 - 18.0 years

44 - 48 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Source: Naukri

About KPI Partners. KPI Partners is a leading provider of data-driven insights and innovative analytics solutions. We strive to empower organizations to harness the full potential of their data, driving informed decision-making and business success. We are seeking an enthusiastic and experienced professional to join our dynamic team as an Associate Director / Director in Data Engineering & Modeling. We are looking for a highly skilled and motivated Associate Director/Director – Data Engineering & Solution Architecture to support the strategic delivery of modern data platforms and enterprise analytics solutions. This is a hands-on leadership role that bridges technology and business, helping design, develop, and operationalize scalable cloud-based data ecosystems. You will work closely with client stakeholders, internal delivery teams, and practice leadership to drive the architecture, implementation, and best practices across key initiatives. Key Responsibilities: Solution Design & Architecture: Collaborate on designing robust, secure, and cost-efficient data architectures using cloud-native platforms such as Databricks, Snowflake, Azure Data Services, AWS, and Incorta. Data Engineering Leadership: Oversee the development of scalable ETL/ELT pipelines using ADF, Airflow, dbt, PySpark, and SQL, with an emphasis on automation, error handling, and auditing. Data Modeling & Integration: Design data models (star, snowflake, canonical), resolve dimensional hierarchies, and implement efficient join strategies. API-based Data Sourcing: Work with REST APIs for data acquisition — manage pagination, throttling, authentication, and schema evolution (a sketch of this pattern appears below). Platform Delivery: Support the end-to-end project lifecycle — from requirement analysis and PoCs to development, deployment, and handover. CI/CD & DevOps Enablement: Implement and manage CI/CD workflows using Git, Azure DevOps, and related tools to enforce quality and streamline deployments. Mentoring & Team Leadership: Mentor senior engineers and developers, conduct code reviews, and promote best practices across engagements. Client Engagement: Interact with clients to understand needs, propose solutions, resolve delivery issues, and maintain high satisfaction levels. Required Skills & Qualifications: 14+ years of experience in Data Engineering, BI, or Solution Architecture roles. Strong hands-on expertise in at least one cloud platform such as Azure, Databricks, Snowflake, or AWS (EMR). Proficiency in Python, SQL, and PySpark for large-scale data transformation. Proven skills in developing dynamic and reusable data pipelines (metadata-driven preferred). Strong grasp of data modeling principles and modern warehouse design. Experience with API integrations, including error handling and schema versioning. Ability to design modular and scalable solutions aligned with business goals. Solid communication and stakeholder management skills. Preferred Qualifications: Exposure to data governance, data quality frameworks, and security best practices. Certifications in Azure Data Engineering, Databricks, or Snowflake are a plus. Experience working with Incorta and building materialized views or delta-based architectures. Experience working with enterprise ERP systems. Exposure to leading data ingestion from Oracle Fusion ERP and other enterprise systems.
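As a rough sketch of the API-based data sourcing pattern referenced above, the example below pages through a REST endpoint and backs off when throttled. The endpoint, auth token and paging parameters are assumptions for illustration only, not details from the listing:

```python
# Hedged sketch of paginated REST extraction with simple throttling handling.
# The endpoint, auth header, and page parameters are placeholders.
import time
import requests

BASE_URL = "https://api.example.com/v1/invoices"   # hypothetical
HEADERS = {"Authorization": "Bearer replace-with-real-token"}

def fetch_all(page_size: int = 200) -> list[dict]:
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        if resp.status_code == 429:          # throttled: back off, retry the same page
            time.sleep(int(resp.headers.get("Retry-After", "5")))
            continue
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:                         # an empty page means we've read everything
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    print(f"Fetched {len(fetch_all())} records")
```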
What We Offer: Opportunity to work on cutting-edge data transformation projects for global enterprises. Mentorship from senior leaders and a clear path to Director-level roles. Flexible work environment and a culture that values innovation, ownership, and growth. Competitive compensation and professional development support.

Posted 2 days ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

Source: Naukri

Role & responsibilities Strong, hands-on proficiency with Snowflake: In-depth knowledge of Snowflake architecture, features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning). Experience in designing and implementing Snowflake data models (schemas, tables, views). Expertise in writing and optimizing complex SQL queries in Snowflake. Experience with data loading and unloading techniques in Snowflake. Solid experience with AWS Cloud services: Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake. Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL - if applicable). Strong experience in designing and building ETL/ELT data pipelines.
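To illustrate the S3-to-Snowflake loading pattern this listing describes, here is a minimal sketch using the Snowflake Python connector. It assumes an external stage over the S3 bucket already exists, and all account, credential, stage and table names are placeholders:

```python
# Hedged sketch: load staged S3 files into a Snowflake table with COPY INTO,
# using the Snowflake Python connector. All names and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="replace-with-secret",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Assumes an external stage named S3_LANDING already points at the S3 landing zone
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @S3_LANDING/orders/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```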

Posted 2 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description: “When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product” – Jeff Bezos. Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate’s route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that helps make those decisions in time. The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative. Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon’s last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility/high-impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience. The Successful Candidate Will Be Able To: Effectively manage customer expectations and resolve conflicts that balance client and company needs. Develop processes to effectively maintain and disseminate project information to stakeholders.
Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done. Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment. Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams. Support the scalable growth of the company by developing and enabling the success of the Operations leadership team. Serve as a role model for Amazon Leadership Principles inside and outside the organization. Actively seek to implement and distribute best practices across the operation. Devise and implement efficient and secure procedures for data management and analysis with attention to all technical aspects. Create and enforce policies for effective data management. Formulate management techniques for quality data collection to ensure adequacy, accuracy and legitimacy of data. Establish rules and procedures for data sharing with upper management, external stakeholders, etc. Basic Qualifications: Knowledge of SQL and Excel. Experience hiring and leading a high-performance team. Knowledge of data engineering pipelines, cloud solutions, ETL management, databases, visualizations and analytical platforms. Knowledge of methods for statistical inference (e.g. regression, experimental design, significance testing). Preferred Qualifications: Knowledge of product experimentation (A/B testing). Knowledge of a scripting language (Python, R, etc.). Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ. Job ID: A2974488

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long term success. About The Role: We are looking for a Senior Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful. A Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot and support ETL pipelines and the cloud infrastructure involved in the process, and will be able to support the visualization team. Roles and Responsibilities: Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Efficient in ETL/ELT development using Azure cloud services and Snowflake, testing and operation/support process (RCA of production issues, Code/Data Fix Strategy, monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high quality and efficient solutions. Build cross-platform data strategy to aggregate multiple sources and process development datasets. Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendations. Job Requirements: Bachelor’s Degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment.
5+ years of experience with setting up and operating data pipelines using Python or SQL. 5+ years of advanced SQL programming: PL/SQL, T-SQL. 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 5+ years of strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables and distributed computing frameworks. Strong collaboration and teamwork skills and excellent written and verbal communication skills. Self-starter and motivated with ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, Continuous Integration, unit-testing tool, and defect management tools. Knowledge: Strong knowledge of Data Engineering concepts (data pipeline creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), Git/Jenkins version control tools, Master Data Management (MDM) and Data Quality tools. Strong experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, monitoring and maintenance). Hands-on experience in databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), Python/Unix shell scripting. ADF, Databricks and Azure certification is a plus. Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake
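As an illustration of delivering Databricks-transformed data into Snowflake, the sketch below aggregates a Delta table with PySpark and writes the result through the Spark Snowflake connector. The paths, option values and table names are placeholders, and the connector and Delta support must be available on the cluster (as they are on Databricks):

```python
# Hedged sketch: aggregate a Delta table in PySpark and write it to Snowflake.
# All paths, credentials, and table names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",
    "sfUser": "example_user",
    "sfPassword": "replace-with-secret",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "TRANSFORM_WH",
}

# Example transformation: daily revenue per store from raw sales stored as Delta
sales = spark.read.format("delta").load("/mnt/datalake/raw/sales")
daily = (
    sales.groupBy("store_id", F.to_date("sold_at").alias("sale_date"))
         .agg(F.sum("amount").alias("revenue"))
)

# Write the result into Snowflake ("snowflake" is the short format name on Databricks)
(daily.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_STORE_REVENUE")
      .mode("overwrite")
      .save())
```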

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

About Tarento: Tarento is a fast-growing technology consulting company headquartered in Stockholm, with a strong presence in India and clients across the globe. We specialize in digital transformation, product engineering, and enterprise solutions, working across diverse industries including retail, manufacturing, and healthcare. Our teams combine Nordic values with Indian expertise to deliver innovative, scalable, and high-impact solutions. We're proud to be recognized as a Great Place to Work, a testament to our inclusive culture, strong leadership, and commitment to employee well-being and growth. At Tarento, you’ll be part of a collaborative environment where ideas are valued, learning is continuous, and careers are built on passion and purpose. About The Role: We are looking for a skilled Python + SQL Developer with 3–5 years of experience, strong technical expertise, excellent communication skills, and solid business understanding. This role involves developing robust data solutions and working closely with business stakeholders to turn requirements into impactful deliverables. Key Responsibilities: Design, develop, and maintain efficient Python scripts and optimized SQL queries for ETL, data processing, and reporting tasks. Work with large datasets to ensure high data quality, integrity, and performance. Understand business requirements and translate them into clear, actionable technical solutions. Collaborate with cross-functional teams, including analysts, business users, and other developers. Document processes, follow coding standards, and contribute to continuous improvement initiatives. Communicate progress, challenges, and solutions effectively with both technical and non-technical audiences. Required Skills & Experience: 3–5 years of hands-on experience with Python programming. Strong proficiency in SQL for complex data queries, joins, and performance tuning. Experience working with relational databases (e.g., MySQL, PostgreSQL, SQL Server) and large datasets. Good understanding of ETL workflows and data pipeline design. Strong communication and interpersonal skills to interact with business teams and stakeholders. Ability to understand and analyze business processes to deliver relevant technical solutions. Good To Have: Exposure to cloud data tools (e.g., AWS RDS, Snowflake, or similar). Familiarity with data visualization tools or BI reporting. Experience in an Agile environment.
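As a small illustration of the Python-plus-SQL ETL work this role describes, the sketch below extracts from one relational database, aggregates with pandas, and loads into a reporting table. The connection strings, column names and table names are placeholders:

```python
# Hedged sketch of a small Python + SQL ETL step: extract from a source database,
# transform with pandas, load into a reporting table. Names are placeholders.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql+psycopg2://user:pass@source-host/sales")
target = create_engine("postgresql+psycopg2://user:pass@dw-host/reporting")

# Extract: only yesterday's orders, with the filter pushed down to the source database
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, created_at "
    "FROM orders WHERE created_at >= CURRENT_DATE - INTERVAL '1 day'",
    source,
)

# Transform: daily revenue per customer
daily = (
    orders.assign(order_date=orders["created_at"].dt.date)
          .groupby(["customer_id", "order_date"], as_index=False)["amount"].sum()
)

# Load: append into the reporting table
daily.to_sql("daily_customer_revenue", target, if_exists="append", index=False)
```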

Posted 2 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Business Intelligence Analyst. Talent Formula is a consulting firm that offers outsourced financial and accounting talent to Chartered Accounting firms worldwide. We are currently sourcing for a Global Accounting Firm's Gold Coast (Australia) office, which is looking to expand its Business Intelligence team. As a BI Analyst you will be working across multiple business stakeholders, tech/IT teams and internal professional staff to develop assets with Power BI and other technologies. Must Have Requirements: 4-5+ years of relevant work experience in analytics, business intelligence or data science. Significant experience leveraging Power BI to benefit a range of industries and scenarios. Data modelling; DAX, M Code and Power Query (master level of at least one). High-level knowledge of best practices. Advanced ETL (Extract Transform Load) skills. Setting up Power BI Data Gateways. SQL. Some level of business acumen/experience (accounting desirable but not necessary). Practical Understanding Of The Below Concepts Is Highly Advantageous: Complex star schema models, ability to resolve and manage circular dependencies in complex datasets, understanding of bidirectional filtering/many-to-many relationships. DAX: rolling averages, YTD/QTR totals, dynamic time intelligence, summarising and joining tables at the DAX level (not Power Query), cumulative calculations both in table and in measure. DAX Functions: SUMMARIZE, TOPN, ADDCOLUMNS, SELECTCOLUMNS, RANKX, VAR, SWITCH, ISERROR, IFERROR, COALESCE. Power Query: parameters/variables, dynamic queries, query folding, joins, extracting JSON, etc. Row Level Security – multiple groups, or multi-requirement interaction. Power BI Service – delivery methods, knowledge of licence requirements, Power BI “Apps”. Bookmarks to create interactive pop-up menus and page variations. Ability to work around limited real estate in reports. Import vs Direct Query. Highly Valued Skills (Should Have a Few Of The Below): Microsoft Azure/Data Factories. ETL from API endpoints. Power Apps development. Power Automate development. Advanced SQL. Financial reporting/accounting skills – custom chart of accounts in PBI. Tableau – would need high-level, end-to-end delivery experience to be worthwhile. Paginated Reports (Power BI) experience. UI/UX and design skills. Note: Candidates will be asked to provide a sample of their work in Power BI by providing the PBIX file of a project they have worked on (can be a passion project). How to apply? To be considered for this role, you must complete 3 steps: Apply to this job and upload your resume. Complete the Skills Test for this role: follow the link below and complete the testing assessments. The first assessment is a Skills Test, to assess your technical ability and numerical reasoning. Complete the Psychometric Test for this role: if you successfully clear the Skills Test, you will be redirected to a Psychometric Test to assess how you think and make decisions. To complete these tests, you must go to https://es.peoplogicaskills.com/es/quiz?testId=717c8fb1106c28c8 and complete the assessments. If you do not complete the assessments then you will not be considered for the role.

Posted 2 days ago

Apply

1.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Yext (NYSE: YEXT) is the leading digital presence platform for multi-location brands, with thousands of customers worldwide. With one central platform, brands can seamlessly deliver consistent, accurate, and engaging experiences and meaningfully connect with customers anywhere in the digital world. Our AI and machine learning technology powers the knowledge behind every customer engagement, which is only possible through our team of innovators and enthusiastic collaborators. Join us and experience firsthand why we are consistently recognized as a ‘Best Place to Work’ globally by industry leaders such as Built In, Fortune, and Great Place To Work®! Yext Services team works with Yext’s largest clients to deliver highly customized and integrated products! Our team includes Project Management, Creative, and Development resources that work together to deliver end-to-end solutions. We take pride in our ability to tackle a wide breadth and depth of challenges and keep pace with the ever-changing needs of our clients. We are looking for a Full Stack Software Engineer to help us build custom web and backend solutions for our enterprise clients. Our team features a Scrum development process with one-week sprints, peer code reviews and individual ownership of projects from kickoff to delivery. You will work alongside engineers from the top universities and tech companies in the world, hands-on with the code from day one. This role is fully on-site in our Hyderabad, India office. What You'll Do Build, modify and maintain web applications with HTML/CSS/JavaScript, ensuring compatibility across multiple browsers and platforms Build and maintain ETL jobs to help our customers store and organize their data in the Yext Knowledge Graph Advise engineering & business stakeholders on best practices, compatibility issues and tradeoffs Augment development tooling to improve software quality, iteration speed, and maintainability using modern technologies Bridge the gap between engineering and product, working equally well with Designers and Project Management Write clean, tested, and well-structured code What You Have BS or above in Computer Science, a related field, or similar college level education Comfortable with a variety of frontend technologies: HTML, CSS, and JavaScript Ability to easily move between backend and frontend technologies Openness to new technologies and creative solutions Comfortable working within a fast-paced high growth startup environment Bonus Points Good working experience with Full Stack (React, TypeScript, JavaScript, Node JS, Python, Go and/or Java) Strong understanding and working experience in Data Structures & Algorithms Experience in data structures like lists, arrays, hash tables, hash maps, stacks, queues, trees, heaps, and graphs. Review sorts, searches, graph traversals, recursion, and iterative algorithms Contribution to open-source projects 1-5 years of relevant software engineering experience Perks And Benefits At Yext, we take pride in our diverse workforce and prioritize creating an engaged and connected working environment. Our ambitious mission is to transform the enterprise with AI search, and we know that to achieve that, we need a global team of innovators, visionary thought leaders, and enthusiastic collaborators passionate about making a meaningful impact in the world and contributing to an extraordinary culture. Benefits We believe that people do their best when they feel their best — and to feel their best, they must be well-informed, fuelled, and rested. 
To ensure our employees are at their best, we offer a wide range of benefits and perks, including: Performance-Based Compensation: We offer an attractive bonus structure and stock options for eligible positions. Comprehensive Leave Package: Our leave package includes Paid Time Off (PTO), Parental Leave, Sick Leave, Casual Leave, Bereavement Leave, National Holidays, and Floating Holidays to ensure a healthy work-life balance. Health & Wellness Offerings: We provide medical insurance with 7L coverage, including enhanced parental and outpatient department (OPD) coverage for you, your spouse, two dependent children, and two parents (as applicable and subject to eligibility requirements). Relocation Benefits: We offer relocation assistance and an allowance to eligible candidates to help ease your transition. World-Class Office & Building Amenities: Our office has a top-notch infrastructure, including gaming rooms, a plush pantry, and breakout areas. Yext is committed to building an inclusive and diverse culture where every person is seen, heard, and valued. We believe in equal employment opportunity and welcome employees and applicants of all races, colors, ethnicities, religions, creeds, national origins, ancestries, genetics, sexes, pregnancy or childbirth, sexual orientations, genders (including gender identity or nonbinary or nonconformity and/or status as a trans individual), ages, physical or mental disabilities, citizenships, marital, parental and/or familial status, past, current or prospective service in the uniformed services, or any characteristic protected under applicable law. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. It is Yext’s policy to provide reasonable accommodations to people with disabilities as required by law. If you have a disability that requires an accommodation in completing this application, interviewing, or participating in the employee selection process, please complete this form.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Description: Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company. A leader in the convenience store and fuel space with over 17,000 stores in 31 countries, serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams to discover, value and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long term success. About The Role: We are looking for a Data Engineer with a collaborative, “can-do” attitude who is committed and strives with determination and motivation to make their team successful. A Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K’s next phase in the digital journey by transforming data to achieve actionable business outcomes. Roles and Responsibilities: Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate technical and domain knowledge of relational and non-relational databases, Data Warehouses, and Data Lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high quality and efficient solutions. Build cross-platform data strategy to aggregate multiple sources and process development datasets. Proactive in stakeholder communication, mentor/guide junior resources by doing regular KT/reverse KT and help them in identifying production bugs/issues if needed and provide resolution recommendations. Job Requirements: Bachelor’s degree in Computer Engineering, Computer Science or related discipline, Master’s Degree preferred. 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional Data Warehousing environment. 3+ years of experience with setting up and operating data pipelines using Python or SQL. 3+ years of advanced SQL programming: PL/SQL, T-SQL. 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 3+ years of
strong and extensive hands-on experience in Azure, preferably data heavy / analytics applications leveraging relational and NoSQL databases, Data Warehouse and Big Data 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure functions 3+ years of experience in defining and enabling data quality standards for auditing, and monitoring Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing and dimensional data modeling concepts Understanding of REST and good API design Experience working with Apache Iceberg, Delta tables and distributed computing frameworks Strong collaboration, teamwork skills, excellent written and verbal communications skills Self-starter and motivated with ability to work in a fast-paced development environment Agile experience highly desirable Proficiency in the development environment, including IDE, database server, GIT, Continuous Integration, unit-testing tool, and defect management tools Preferred Skills Strong Knowledge of Data Engineering concepts (Data pipelines creation, Data Warehousing, Data Marts/Cubes, Data Reconciliation and Audit, Data Management) Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks Working Knowledge of Dev-Ops processes (CI/CD), Git/Jenkins version control tool, Master Data Management (MDM) and Data Quality tools Strong Experience in ETL/ELT development, QA and operation/support process (RCA of production issues, Code/Data Fix Strategy, Monitoring and maintenance) Hands on experience in Databases like (Azure SQL DB, MySQL/, Cosmos DB etc.), File system (Blob Storage), Python/Unix shell Scripting ADF, Databricks and Azure certification is a plus Technologies we use : Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (Powershell, Bash), Git, Terraform, Power BI, Snowflake Show more Show less

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Join us as a Data Engineer

We’re looking for someone to build effortless, digital-first customer experiences to help simplify our organisation and keep our data safe and secure. Day-to-day, you’ll develop innovative, data-driven solutions through data pipelines, modelling and ETL design while striving to be commercially successful through insights. If you’re ready for a new challenge and want to bring a competitive edge to your career profile by delivering streaming data ingestions, this could be the role for you. We're offering this role at associate vice president level.

What you’ll do

Your daily responsibilities will include developing a comprehensive knowledge of our data structures and metrics, advocating for change when needed for product development. You’ll also provide transformation solutions and carry out complex data extractions. We’ll expect you to develop a clear understanding of data platform cost levels to build cost-effective and strategic solutions. You’ll also source new data by using the most appropriate tooling before integrating it into the overall solution to deliver it to our customers.

You’ll also be responsible for: driving customer value by understanding complex business problems and requirements to correctly apply the most appropriate and reusable tools to build data solutions; participating in the data engineering community to deliver opportunities that support our strategic direction; carrying out complex data engineering tasks to build a scalable data architecture and transform data so it is usable by analysts and data scientists; building advanced automation of data engineering pipelines through the removal of manual stages; and leading on the planning and design of complex products while providing guidance to colleagues and the wider team when required.

The skills you’ll need

To be successful in this role, you’ll have an understanding of data usage and dependencies with wider teams and the end customer. You’ll also have experience of extracting value and features from large-scale data. We’ll expect you to have experience of ETL technical design, data quality testing, cleansing and monitoring, data sourcing, exploration and analysis, and data warehousing and data modelling capabilities.

You’ll also need: experience of using programming languages alongside knowledge of data and software engineering fundamentals; good knowledge of modern code development practices; and great communication skills with the ability to proactively engage with a range of stakeholders.

Posted 2 days ago

Apply

6.0 - 9.0 years

10 - 19 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Naukri logo

Mandatory skills: AWS, Kafka, ETL, Glue, Lambda. Tech stack experience required: Python, SQL.

Posted 2 days ago

Apply

6.0 - 11.0 years

13 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

Naukri logo

AWS Glue - mandatory. AWS S3 and AWS Lambda - should have some experience. Must have used Snowpipe to build integration pipelines and know how to build stored procedures from scratch. Able to write complex SQL queries. Python - NumPy and pandas.

Posted 2 days ago

Apply

4.0 - 8.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Naukri logo

Role & responsibilities: Building automated pipelines and solutions for data migration/data import or other operations requiring data ETL. Performing analysis on core products to support migration planning and development. Working closely with the Team Lead and collaborating with other stakeholders to gather requirements and build well architected data solutions. Produce supporting documentation, such as specifications, data models, relation between data and others, required for the effective development, usage and communication of the data operations solutions with different stakeholders. Competencies, Characteristics and Traits: Mandatory Skills - Total experience - 5 years, of which Minimum 3 years of Experience with SnapLogic pipeline development and minimum of 2 years in building ETL/ELT Pipelines, is needed. Experience working with databases on-premises and/or cloud-based environments such as MSSQL, MySQL, PostgreSQL, AzureSQL, Aurora MySQL & PostgreSQL, AWS RDS etc. Experience working with API sources and destinations Strong problem solving and analytical skills, high attention to detail Passion for analytics, real-time data, and monitoring Critical Thinking and collaboration skills Self-starter and a quick learner, ready to learn new technologies and tools that the job demands Preferred candidate profile: Essential: Strong experience working with databases on-premises and/or cloud-based environments such as MSSQL, MySQL, PostgreSQL, AzureSQL, Aurora MySQL & PostgreSQL, AWS RDS etc. Strong knowledge of databases, data modeling and data life cycle Proficient in understanding data and writing complex SQL Mandatory Skills - Total experience - 5 years, of which Minimum 3 years of Experience with SnapLogic pipeline development and minimum of 2 years in building ETL/ELT Pipelines, is needed. Experience working with REST API in data pipelines Strong problem solving and high attention to detail Passion for analytics, real-time data, and monitoring Critical Thinking, good communication and collaboration skills Focus on high performance and quality delivery Highly self-motivated and continuous learner Desirable: Experience working with no-SQL databases like MongoDB Experience with Snaplogic administration is preferable Experience working with Microsoft Power Platform (PowerAutomate and PowerApps) or any similar automation / RPA tool Experience with cloud data platforms like snowflake, data bricks, AWS, Azure etc. Awareness of emerging ETL and Cloud concepts such as Amazon AWS or Microsoft Azure Experience working with Scripting languages, such as Python, R, JavaScript, etc.

Posted 2 days ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

Linkedin logo

The purpose of this role is to work with the business using Adobe Analytics, Google Analytics, WebTrends and other technologies. This role serves as a subject matter expert on tag management and audience platforms, and provides guidance and oversight to other web audience resources. This role is able to provide direction and guidance on the integration of marketing technologies and tools.

Job Description:

Mandatory (top 5): Experience in leading Customer Data Platform related development. Experience in Python, Java scripting or Node.js. Experience with APIs (REST, Open APIs, cURL). Data analysis, ingestion, modelling and mapping. Experience with unstructured data using JSON/Parquet file formats.

Preferred (top 5): Understanding of CCPA, GDPR and other data protection acts. Client-facing experience. Experience with reporting technologies. Experience with any of AEP, ActionIQ, Lytics, Segment, Tealium, C360. Big Data ETLs.

Responsibilities: Leading the team in the implementation of technical solutions and driving proofs of concept. Responsible for building the data model based on the gathered requirements and data architecture. Responsible for data ingestion into the CDP platform via batch and real-time streaming modes, using ETL, APIs, JavaScript, etc. Responsible for data extraction/outbound dataflows to reporting tools, other Adobe products or third-party systems. Responsible for working on defined business rules/transformations/BRDs. Collaborate with internal teams to develop solutions on the CDP platform for marketing activation use cases. Should be able to work on queries, segment development, audience creation/activation and customer journey orchestration. Should be able to define segments and orchestrate customer journeys based on the BRD.

Location: Bangalore
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 2 days ago

Apply

3.0 - 8.0 years

6 - 18 Lacs

Hyderabad

Work from Office

Naukri logo

Mandatory skills for Data Engineer: Python/PySpark, AWS Glue, Lambda, Redshift, SQL. Expert knowledge in AWS Data Lake implementation and support (S3, Glue, DMS, Athena, Lambda, API Gateway, Redshift).

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Position Overview: We are seeking a highly skilled and experienced Senior Microsoft SQL Database Developer to join our team. The ideal candidate will be responsible for designing, developing, and maintaining complex Microsoft SQL databases, ensuring their efficiency, security, and scalability. This position requires deep expertise in Microsoft SQL development, performance tuning, database design, and optimization, along with a solid understanding of database architecture and troubleshooting. The Senior Microsoft SQL Database Developer will work closely with cross-functional teams to ensure the successful delivery of high-quality data solutions.

Key Responsibilities:

Database Development & Design: Develop and maintain complex SQL queries, stored procedures, triggers, functions and views. Design, implement, and optimize relational database schemas. Create and maintain database objects (tables, indexes, views, etc.), ensuring data integrity and optimization.

Performance Tuning & Optimization: Analyse query performance and optimize SQL statements for maximum efficiency. Use indexing, partitioning, and other techniques to improve database performance. Monitor and resolve database performance issues using profiling tools and techniques.

Data Integration & Migration: Lead data integration efforts between multiple data sources, ensuring accurate data flow and consistency. Support data migration projects, converting and importing data between systems.

Collaboration & Communication: Work closely with application developers, data architects, business analysts, and other stakeholders to deliver database solutions. Provide technical guidance and mentoring to junior developers. Participate in code reviews, ensuring SQL best practices are followed.

Troubleshooting & Issue Resolution: Identify and resolve complex database issues related to performance, functionality, and data integrity. Provide support for production database environments and resolve urgent database issues quickly.

Documentation & Reporting: Document database structures, processes, and procedures to ensure consistency and compliance with internal standards. Provide status reports, performance insights, and recommendations to management.

Key Qualifications:

Education: Bachelor’s degree in computer science, information technology, or a related field (or equivalent work experience).

Experience: Minimum of 5-7 years of experience in SQL development and database management. Proven experience working with large-scale SQL databases (Microsoft SQL Server, Azure SQL).

Skills & Expertise: Strong proficiency in SQL, including complex queries, joins, subqueries, indexing, and optimization. Experience with SQL Server, T-SQL, or other relational database management systems. Expertise in database performance tuning, troubleshooting, and query optimization. Strong knowledge of database security best practices (user roles, encryption, etc.). Familiarity with cloud databases and technologies (e.g., Azure SQL) is a plus. Experience with data integration and ETL processes. Familiarity with version control tools such as Git.

Soft Skills: Strong analytical and problem-solving abilities. Ability to work independently and as part of a team. Excellent communication skills, both written and verbal. Strong attention to detail and ability to handle complex tasks with minimal supervision.

Preferred Skills: Experience with Azure SQL and SQL Server. Knowledge of database monitoring tools (e.g., SQL Profiler, New Relic, SolarWinds). Familiarity with DevOps practices and CI/CD pipelines related to database development. Experience in agile or scrum development environments.

Working Conditions: Full-time position with flexible work hours. Only work from office available. Occasional on-call support for production database systems may be required.

Why Join Us? Competitive salary and benefits package. Opportunities for career growth and professional development. A dynamic, collaborative, and innovative work environment. Exposure to cutting-edge database technologies and large-scale systems.

Posted 2 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: SAP. Management Level: Senior Associate.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. Creating business intelligence from data requires an understanding of the business, the data, and the technology used to store and analyse that data. Using our Rapid Business Intelligence Solutions, data visualisation and integrated reporting dashboards, we can deliver agile, highly interactive reporting and analytics that help our clients to more effectively run their business and understand what business questions can be answered and how to unlock the answers.

Required Skills: Degree in Computer Science or a related discipline. Minimum 4 years of relevant experience. Fluency in Python or shell scripting. Experience with data mining, modeling, mapping, and ETL processes. Experience with Azure Data Factory, Data Lake, Databricks, Synapse Analytics, BI dashboards, and BI implementation projects. Hands-on experience in Hadoop, PySpark, and Spark SQL. Knowledge of Azure/AWS, RESTful web services, SOAP, SOA, Microsoft SQL Server, MySQL Server, and Agile methodology is an advantage. Strong analytical, problem-solving, and communication skills. Excellent command of both written and spoken English. Should be able to design, develop, deliver and maintain data infrastructures.

Mandatory Skill Set: Hadoop, PySpark. Preferred Skill Set: Hadoop, PySpark. Years of experience required: 4-8. Qualifications: B.E / B.Tech.

Required Skills: Hadoop Cluster, PySpark. Optional Skills. Desired Languages (if blank, desired languages not specified). Travel Requirements: Not Specified. Available for Work Visa Sponsorship? No. Government Clearance Required? No. Job Posting End Date

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Description

At Amazon, we strive to be the most innovative and customer-centric company on the planet. Come work with us to develop innovative products, tools and research-driven solutions in a fast-paced environment by collaborating with smart and passionate leaders, program managers and software developers. This role is based out of our Bangalore corporate office and is for a passionate, dynamic, analytical, innovative, hands-on, and customer-centric Business Analyst.

Key job responsibilities

This role primarily focuses on deep dives, creating dashboards for the business, and working with different teams to develop and track metrics and bridges. Design, develop and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs. In-depth research of drivers of the Localization business. Analyze key metrics to uncover trends and root causes of issues. Suggest and build new metrics and analyses that enable a better perspective on the business. Capture the right metrics to influence stakeholders and measure success. Develop domain expertise and apply it to operational problems to find solutions. Work across teams with different stakeholders to prioritize and deliver data and reporting. Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

Basic Qualifications

5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with data modeling, warehousing and building ETL pipelines. Experience using advanced SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.

Preferred Qualifications

Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift. Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI - BLR 14 SEZ
Job ID: A2992205

Posted 2 days ago

Apply

Exploring ETL Jobs in India

The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving tech industries and often have a high demand for ETL professionals.

Average Salary Range

The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.

Career Path

In the ETL field, a typical career path may include roles such as:

  • Junior ETL Developer
  • ETL Developer
  • Senior ETL Developer
  • ETL Tech Lead
  • ETL Architect

As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.

Related Skills

Alongside ETL, professionals in this field are often expected to have skills in:

  • SQL
  • Data Warehousing
  • Data Modeling
  • ETL Tools (e.g., Informatica, Talend)
  • Database Management Systems (e.g., Oracle, SQL Server)

Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
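
To make these related skills concrete, here is a minimal, illustrative sketch of a small extract-transform-load step written in Python with pandas and the standard-library sqlite3 module. The database files, table names and columns are hypothetical and exist only to show the shape of a typical ETL script, not any particular employer's stack.

```python
import sqlite3
import pandas as pd

# Extract: pull raw orders from a (hypothetical) source database.
source = sqlite3.connect("source.db")
orders = pd.read_sql_query(
    "SELECT order_id, customer_id, amount, order_date FROM raw_orders",
    source,
)

# Transform: basic cleansing plus a simple aggregation -- the kind of work
# that data-quality and data-modeling interview questions usually probe.
orders = orders.dropna(subset=["order_id", "customer_id"])
orders["order_date"] = pd.to_datetime(orders["order_date"])
daily_revenue = (
    orders.groupby(orders["order_date"].dt.date)["amount"]
    .sum()
    .reset_index()
)

# Load: write the transformed result into a warehouse-style target table.
target = sqlite3.connect("warehouse.db")
daily_revenue.to_sql("daily_revenue", target, if_exists="replace", index=False)
```

Real pipelines add scheduling, logging, error handling and idempotent loads on top of this basic skeleton, usually through an ETL tool or an orchestrator rather than a standalone script.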

Interview Questions

Here are 25 interview questions that you may encounter in ETL job interviews; a short, illustrative code sketch of the incremental-load pattern, which several of these questions touch on, follows the list:

  • What is ETL and why is it important? (basic)
  • Explain the difference between ETL and ELT processes. (medium)
  • How do you handle incremental loads in ETL processes? (medium)
  • What is a surrogate key in the context of ETL? (basic)
  • Can you explain the concept of data profiling in ETL? (medium)
  • How do you handle data quality issues in ETL processes? (medium)
  • What are some common ETL tools you have worked with? (basic)
  • Explain the difference between a full load and an incremental load. (basic)
  • How do you optimize ETL processes for performance? (medium)
  • Can you describe a challenging ETL project you worked on and how you overcame obstacles? (advanced)
  • What is the significance of data cleansing in ETL? (basic)
  • How do you ensure data security and compliance in ETL processes? (medium)
  • Have you worked with real-time data integration in ETL? If so, how did you approach it? (advanced)
  • What are the key components of an ETL architecture? (basic)
  • How do you handle data transformation requirements in ETL processes? (medium)
  • What are some best practices for ETL development? (medium)
  • Can you explain the concept of change data capture in ETL? (medium)
  • How do you troubleshoot ETL job failures? (medium)
  • What role does metadata play in ETL processes? (basic)
  • How do you handle complex transformations in ETL processes? (medium)
  • What is the importance of data lineage in ETL? (basic)
  • Have you worked with parallel processing in ETL? If so, explain your experience. (advanced)
  • How do you ensure data consistency across different ETL jobs? (medium)
  • Can you explain the concept of slowly changing dimensions in ETL? (medium)
  • How do you document ETL processes for knowledge sharing and future reference? (basic)
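
As a study aid for the incremental-load, change-data-capture and watermark-related questions above, here is a minimal, illustrative Python sketch of a high-watermark incremental load using the standard-library sqlite3 module. All table names, columns and file paths are hypothetical; commercial ETL tools (Informatica, Talend, ADF, Glue, etc.) implement the same idea through their own configuration rather than hand-written code.

```python
import sqlite3

SOURCE_DB = "source.db"     # hypothetical source system
TARGET_DB = "warehouse.db"  # hypothetical warehouse

def incremental_load() -> int:
    """Copy only rows changed since the last successful run (high-watermark pattern)."""
    src = sqlite3.connect(SOURCE_DB)
    tgt = sqlite3.connect(TARGET_DB)

    # Bootstrap the target table and the watermark bookkeeping on first run.
    tgt.execute("""CREATE TABLE IF NOT EXISTS dim_customer (
                       customer_id INTEGER PRIMARY KEY,
                       name        TEXT,
                       updated_at  TEXT)""")
    tgt.execute("""CREATE TABLE IF NOT EXISTS etl_watermark (
                       table_name  TEXT PRIMARY KEY,
                       last_loaded TEXT)""")

    row = tgt.execute(
        "SELECT last_loaded FROM etl_watermark WHERE table_name = 'dim_customer'"
    ).fetchone()
    watermark = row[0] if row else "1970-01-01 00:00:00"

    # Extract only rows modified after the previous watermark.
    changed = src.execute(
        "SELECT customer_id, name, updated_at FROM customers WHERE updated_at > ?",
        (watermark,),
    ).fetchall()

    # Load with an upsert so re-runs stay idempotent (SCD Type 1 overwrite).
    tgt.executemany(
        """INSERT INTO dim_customer (customer_id, name, updated_at)
           VALUES (?, ?, ?)
           ON CONFLICT(customer_id) DO UPDATE SET
               name = excluded.name,
               updated_at = excluded.updated_at""",
        changed,
    )

    # Advance the watermark only after the batch has been written.
    if changed:
        new_mark = max(r[2] for r in changed)
        tgt.execute(
            """INSERT INTO etl_watermark (table_name, last_loaded)
               VALUES ('dim_customer', ?)
               ON CONFLICT(table_name) DO UPDATE SET last_loaded = excluded.last_loaded""",
            (new_mark,),
        )
    tgt.commit()
    return len(changed)
```

A complete interview answer would also mention what a timestamp watermark cannot catch on its own, such as hard deletes and late-arriving updates, which is where log-based change data capture and slowly changing dimension (Type 2) history tables come in.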

Closing Remarks

As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies