
172 ELT Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

9 - 14 years

10 - 20 Lacs

Hyderabad

Work from Office

What does the Senior Analytics Engineer opportunity look like for you? You will play a pivotal role in developing and maintaining interactive dashboards for clients, investors, and operational teams using Tableau, Power BI, and other visual analytics tools. You will work closely with clients and senior stakeholders to capture their BI requirements, conduct data analysis using SQL and Python (or other open-source languages), and visualise the insights in an impactful manner.

To be successful in this role, we require the following experience:
- Advanced working knowledge of conceptual and physical BI implementations, demonstrating expertise in analytics and BI.
- Comfort interacting directly with external clients verbally, with the interpersonal savvy to pick up on the nuances of each conversation and, from those interactions, the ability to conceptualise the end-to-end database architecture and data model from the source to the destination of the data.
- Advanced hands-on experience working with and querying structured and unstructured data in data warehouses and data lakes, both on-premises and in the cloud (Microsoft SQL Server, Azure, AWS, or GCP); at least 8 years of demonstrable advanced experience with SQL Server and cloud-based data stores, implementing complex and robust solutions.
- Advanced hands-on experience with SQL Server data tools such as SSIS or Azure Data Factory (at least 8 years of demonstrable experience).
- Advanced hands-on experience with ETL/ELT methods such as incremental and full load (a minimal sketch follows this listing), and with the tools that implement them (Azure Data Factory, Azure Databricks, SSIS, Python, dbt, Airflow, Alteryx) to cater for complex data flows.
- Advanced hands-on experience implementing dimensional data models within analytics databases and data warehouses.
- Advanced hands-on experience with Python/Spark packages for analytical data modelling, data analysis, and data flows (pandas, NumPy, scikit-learn, etc.) as well as data visualisation (Matplotlib, Plotly, Dash, etc.) to solve complex problems.
- Advanced hands-on experience with BI tools such as Power BI and Tableau (at least 7 years of demonstrable experience), showcasing advanced storytelling and deep insights.
- Advanced experience with JavaScript libraries for front-end development and embedding of visuals (e.g. D3, React, Node).
- Ability to take a broad perspective to identify solutions, working independently with guidance only in the most complex situations.
- Ability to guide others in resolving complex issues.

Tasks (what the role does day-to-day):
- Engaging with external clients, vendors, project reps, and internal stakeholders from operations and client services teams to understand their analytics and dashboard requirements and interpret them into technical specifications.
- Creating and maintaining the high-level database and model architecture of the BI solution, recommending best practice.
- Conducting in-depth exploratory data analysis and defining business-critical features.
- Working with key teams within Group Technology to get the appropriate access and infrastructure set up to develop advanced visualisations (BI dashboards).
- Creating and optimising SQL Server stored procedures, views, and queries, including queries across multiple data sources, to retrieve data efficiently and effectively, employing the fundamentals of dimensional data modelling.
- Designing and creating advanced, robust data solutions that follow data warehouse best practice and support business analytics asks.
- Creating, maintaining, and documenting the analytics solution processes for each project worked on.
- Resolving IT and data issues: when database issues arise or support requests come in through Azure DevOps or the help desk, BI developers are expected to resolve them, which requires a good understanding of legacy solutions and issues.
- Ensuring updates to Azure DevOps items, help desk tickets, and work in progress are well communicated, keeping escalations regarding support to a minimum.

Key competencies for position and level (see Group Competency model):
- Analytical Reasoning: identify patterns within a group of facts or rules and use those patterns to determine what could or must be true, coming to logical conclusions.
- Critical Thinking: conceptualise, analyse, synthesise, evaluate, and apply information to reach an answer or conclusion.
- Conceptual Thinking and Creative Problem Solving: an original thinker who goes beyond traditional approaches, with the resourcefulness to adapt to new or difficult situations and persistently devise ways to overcome obstacles.
- Interpersonal Savvy: relating comfortably with people across all levels, functions, cultures, and geographies, building rapport in an open, friendly, and accepting way.
- Effective Communication: adjusting communication style to fit the audience and message, providing timely information across the organisation, and encouraging the open expression of diverse ideas and opinions.
- Results/Action Orientation and Determination: readily taking action on challenges without unnecessary planning, identifying new opportunities and taking ownership of them, with a focus on getting the problem solved.

Key behaviours we expect to see, in addition to demonstrating our Group Values (Authentic, Bold, and Collaborative):
- Facilitate open and frank debate to drive forward improvement.
- Willingness to learn, develop, and keep abreast of technological developments.
- An analytical mind, excellent problem-solving and diagnostic skills, and attention to detail.
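Several listings on this page ask for hands-on experience with full-load and incremental-load ETL/ELT methods. Purely as an illustrative, hedged sketch (the table name, the last_modified watermark column, and the connection strings are assumptions, not taken from the posting), the two patterns in pandas and SQLAlchemy look roughly like this:

```python
# Minimal sketch of full-load vs incremental-load ETL. The table, the
# `last_modified` watermark column, and both DSNs are hypothetical.
import pandas as pd
from sqlalchemy import create_engine, text

source = create_engine("postgresql://user:pass@source-host/db")   # assumed DSN
target = create_engine("postgresql://user:pass@target-host/dwh")  # assumed DSN

def full_load() -> None:
    """Re-extract the whole table and replace the target copy."""
    df = pd.read_sql("SELECT * FROM sales", source)
    df.to_sql("sales", target, if_exists="replace", index=False)

def incremental_load() -> None:
    """Extract only rows changed since the last successful run (watermark)."""
    with target.connect() as conn:
        watermark = conn.execute(
            text("SELECT COALESCE(MAX(last_modified), '1970-01-01') FROM sales")
        ).scalar()
    df = pd.read_sql(
        text("SELECT * FROM sales WHERE last_modified > :wm"),
        source,
        params={"wm": watermark},
    )
    df.to_sql("sales", target, if_exists="append", index=False)
```

Full load is simpler and self-healing but reprocesses everything; the incremental pattern trades that simplicity for much smaller, faster loads, at the cost of maintaining a reliable watermark.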

Posted 2 months ago

Apply

7 - 12 years

22 - 27 Lacs

Bengaluru

Work from Office

Overview:
- Work closely with data analysts and architects to achieve data quality (DQ) objectives.
- Validate data pipelines and ensure the quality of datasets.
- Collaborate with the Data Delivery technology teams and architects to define and develop solutions.
- Gather business requirements for building the Data Quality Dashboard, covering the full set of functionality from exception identification to remediation.
- Formulate and actively develop test cases, test scripts, and output report specifications for data remediation projects.
- Maintain up-to-date documentation for data quality audits and processes.
- Manage and provide data quality control; perform deep-dive analysis of data to identify problems, root causes, and solutions.
- Integrate data QA automation using DQ tools.
- Research new tools and technologies for the project and bring innovations the team can benefit from.
- Mentor the QA team, provide solutions to team issues, and troubleshoot to optimise performance.
- Lead a team of 8 data QA engineers, owning and prioritising work allocation based on business need.
- Provide regular coaching and feedback to direct reports.

Responsibilities:
- 7+ years of experience in data test automation, data comparison, and validation (a minimal pytest sketch follows this listing).
- Must have experience with ETL/ELT tools and pipelines.
- Working experience with Python libraries such as pandas, NumPy, and SQLAlchemy for ETL.
- Strong understanding of data warehouse / data lake architecture and development.
- Experience with relational SQL and NoSQL databases.
- Good knowledge of SQL, database procedures, packages, and functions.
- Experience with data pipeline and workflow management tools.
- Experience with AWS cloud services: EC2, EMR, RDS, Redshift.
- Experience with SQL injection testing and query performance (Athena).
- Must have CI/CD knowledge; knowledge of Bitbucket/Git.
- Excellent interpersonal and communication skills.

Additional skills:
- Tool-agnostic; recommends new processes and techniques to improve testing capability for our clients.
- Good analytical, problem-solving, and communication skills.
- Leadership/mentorship experience; demonstrates motivation, ownership, compassion, and leadership within the team.
- Communicates feedback articulately; has an analytical way of thinking.

Nice to have:
- Understanding of data observability tools such as Great Expectations, Lightup, etc.
- Understanding of cloud technologies in AWS or Azure.
- Experience with advanced Excel.
- Experience with data tools such as Power BI, Datadog, Atlan, etc.
- Team management experience.
- Knowledge of Atlassian products such as Jira and Confluence.

Supervisory responsibilities: maximum 5 QA employees.
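This listing centres on automated data comparison and validation with Python. As a hedged sketch only (the extract file paths, the key column, and the numeric columns are invented for illustration), a pytest-based source-to-target reconciliation might look like:

```python
# Hypothetical source-to-target reconciliation tests in pytest + pandas.
# File paths and column names are assumptions, not from the posting.
import pandas as pd
import pytest

@pytest.fixture(scope="module")
def frames():
    source = pd.read_csv("extracts/orders_source.csv")
    target = pd.read_csv("extracts/orders_target.csv")
    return source, target

def test_row_counts_match(frames):
    source, target = frames
    assert len(source) == len(target), "row count drifted during load"

def test_no_duplicate_keys(frames):
    _, target = frames
    assert not target["order_id"].duplicated().any()

def test_column_totals_match(frames):
    # A cheap checksum: numeric columns should sum to the same totals.
    source, target = frames
    for col in ["quantity", "amount"]:
        assert source[col].sum() == pytest.approx(target[col].sum())
```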

Posted 2 months ago

Apply

3 - 7 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum experience: 3 year(s)
Educational qualification: As per Accenture standards

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with Microsoft Azure Data Services and collaborating with cross-functional teams to deliver impactful solutions.

Roles & responsibilities:
- Lead the design, development, and deployment of applications using Microsoft Azure Data Services.
- Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of solutions.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.
- Conduct detailed analysis of business requirements, translating them into technical specifications and design documents.
- Ensure the quality and integrity of the application through rigorous testing and debugging.

Professional & technical skills (must have):
- Azure Data Factory (data pipeline and framework implementation)
- SQL Server (strong SQL development), including SQL stored procedures
- ETL/ELT and DWH concepts
- Azure DevOps
- Azure Blob Storage, Data Lake Gen1/Gen2

Additional information: The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services, a strong educational background in computer science or a related field, and a proven track record of delivering impactful solutions. This position is based at our Bengaluru office.

Posted 2 months ago

Apply

10 - 15 years

35 - 40 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled and experienced Azure Data Engineer to join our dynamic team. In this critical role, you will be responsible for designing, developing, and maintaining robust and scalable data solutions on the Microsoft Azure platform. You will work closely with data scientists, analysts, and business stakeholders to translate business requirements into effective data pipelines and data models.

Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Data Factory, Azure Databricks, and other relevant Azure services (a minimal PySpark sketch follows this listing).
- Develop and maintain data lakes and data warehouses on Azure, including Azure Data Lake Storage Gen2 and Azure Synapse Analytics.
- Build and optimize data models for data warehousing, data marts, and data lakes.
- Develop and implement data quality checks and data governance processes.
- Troubleshoot and resolve data-related issues.
- Collaborate with data scientists and analysts to support data exploration and analysis.
- Stay current with the latest advancements in cloud computing and data engineering technologies.
- Mentor junior data engineers and contribute to the development of best practices.
- Participate in all phases of the software development lifecycle, from requirements gathering to deployment and maintenance.

Qualifications:
- 10+ years of experience in data engineering, with at least 5 years working with Azure data services.
- Strong proficiency in Python and other relevant programming languages.
- Experience with data warehousing and data lake architectures.
- Experience with ETL/ELT tools and technologies, such as Azure Data Factory, Azure Databricks, and Apache Spark.
- Experience with data modeling and data warehousing concepts.
- Experience with data quality and data governance best practices.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Experience with Agile development methodologies.
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree preferred).
- Relevant Azure certifications (e.g., Azure Data Engineer Associate) are a plus.

Bonus points:
- Experience with machine learning and artificial intelligence.
- Experience with containerization technologies (e.g., Docker, Kubernetes).
- Experience with DevOps practices and tools.
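The pipelines this role describes are typically authored as PySpark jobs in Azure Databricks. A minimal, hedged sketch (the storage account, container names, and columns are assumptions, and the cluster is presumed to already hold ADLS credentials):

```python
# Minimal PySpark batch sketch: read raw CSVs from ADLS Gen2, apply a simple
# cleanup, and write a Delta table. All paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")
)

clean = (
    raw.dropDuplicates(["order_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

(
    clean.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders/")
)
```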

Posted 2 months ago

Apply

4 - 8 years

6 - 10 Lacs

Bengaluru

Work from Office

Overall 4 to 8 years of experience in the IT industry, with a minimum of 4 years working on data engineering using Azure Databricks, Synapse, and ADF/Airflow. At least 3 projects' experience building and maintaining ETL/ELT pipelines for large data sets, covering complex data processing, transformations, business logic, cost monitoring and performance optimization, and feature engineering processes.

Must-have skills:
- Extensive experience with Azure Databricks (ADB), Delta Lake, Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), Azure SQL Database (SQL DB), SQL, and ELT/ETL pipeline development in a Spark-based environment.
- Extensive experience with Spark Core, PySpark, Python, Spark SQL, Scala, and Azure Blob Storage.
- Experience in real-time data processing using Apache Kafka / Event Hubs / IoT Hub, Structured Streaming, and Stream Analytics (a minimal streaming sketch follows this listing).
- Experience with Apache Airflow for ELT orchestration.
- Experience with infrastructure management and infrastructure as code (e.g. Terraform).
- Experience with CI/CD and version control tools such as GitHub and Azure DevOps.
- Experience with the Azure cloud platform.

Good to have:
- Experience/knowledge of containerization (Docker, Kubernetes).
- Experience working in Agile methodology.

Qualifications: BE, MS, M.Tech, or MCA.
Additional information: certifications in Azure Big Data; Databricks Certified Associate.
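For the real-time processing requirement, a hedged Spark Structured Streaming sketch follows; the broker address, topic, and paths are assumptions, and on Azure an Event Hubs Kafka-compatible endpoint could stand in for the broker:

```python
# Hypothetical Structured Streaming job: consume a Kafka topic and append
# the raw events to a Delta table. Broker, topic, and paths are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(
        F.col("key").cast("string"),
        F.col("value").cast("string").alias("payload"),
        "timestamp",
    )
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .outputMode("append")
    .start("/mnt/curated/orders_events")
)
query.awaitTermination()
```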

Posted 2 months ago

Apply

5 - 9 years

18 - 20 Lacs

Navi Mumbai

Work from Office

Position Overview: We are looking for a skilled and visionary Data Engineering Lead to join our growing team. In this role, you will lead a team of data engineers in designing, developing, and maintaining robust data pipelines and infrastructure. You will work closely with cross-functional teams to support data-driven decision-making and ensure the availability, quality, and integrity of our data assets.

Role & responsibilities:
- Build, develop, and maintain efficient, high-performance data pipelines across both cloud and on-premises environments.
- Ensure the accuracy, adequacy, and legitimacy of data.
- Prepare ETL pipelines to extract data from various sources and store it in a centralized location.
- Analyse, interpret, and present results through effective visualization and reports; identify critical metrics.
- Implement and instill best practices for effective data management.
- Monitor the use of data systems and ensure the correctness, completeness, and availability of data services.
- Optimize data infrastructure and processes for cost efficiency on AWS cloud and on-premises environments.
- Utilize Apache Airflow, NiFi, or equivalent tools to build and manage data workflows and integrations (a minimal Airflow sketch follows this listing).
- Implement best practices for data governance, security, and compliance.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies in data engineering and cloud computing.
- Lead and mentor a team of data engineers, fostering a culture of collaboration, innovation, and continuous improvement.
- Define team objectives and performance metrics, conducting regular performance evaluations and providing constructive feedback.
- Facilitate knowledge sharing and professional development within the team.

Preferred candidate profile:
- 6-9 years of proven experience as a Data Engineering Lead or in a similar role.
- Extensive hands-on experience with ETL and ELT processes.
- Strong expertise in data integrity and quality assurance.
- Proficiency in optimizing AWS cloud services and on-premises infrastructure for cost and performance.
- Hands-on experience with Apache Airflow and NiFi.
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with SQL and NoSQL databases.
- Experience building and maintaining a single source of truth.
- Familiarity with data warehousing solutions like Amazon Redshift, Snowflake, or BigQuery.
- Strong problem-solving skills and the ability to work under pressure.
- Hands-on experience with data visualization tools such as Tableau, Power BI, or Looker.
- Experience in financial services is a must.

Skills required: team leading and team management; strong analytical and problem-solving abilities; excellent communication and interpersonal skills; ability to work collaboratively in a fast-paced environment and manage multiple priorities.

Educational qualifications: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
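The orchestration requirement here centres on Apache Airflow. A minimal hedged DAG sketch, with the task bodies and schedule invented purely for illustration:

```python
# Hypothetical Airflow DAG wiring an extract -> transform -> load chain.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result to the warehouse")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```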

Posted 2 months ago

Apply

3 - 7 years

5 - 10 Lacs

Hyderabad

Work from Office

Job Summary: HighRadius is looking for a dynamic Java professional to join our Engineering Team. The role's responsibilities include participating in software development activities, writing clean and efficient code for various applications, and running tests to improve system functionality. You will write code that is easily maintainable and highly reliable, demonstrate knowledge of common programming best practices, and mentor junior members of the team in delivering sprint stories and tasks.

Key Responsibilities:
- Work independently to translate product requirements into working software with high quality.
- Participate in collaborative software development with peers within the team through code reviews and design discussions.
- Produce highly performant software in a cloud-native environment.
- Debug performance and functional issues in production.
- Participate in raising the technology bar within the team.

Skills & Experience Needed:
- Bachelor's degree required.
- Experience range: 3+ years.
- Technology stack: Core Java, Java 8, Hibernate, Spring, SQL.
- Good to have: Ext JS or any UI framework experience, Elasticsearch, cloud architecture (AWS, GCP, Azure).
- Knowledge of design patterns, Jenkins, Git, Grafana, ELT, JUnit.

Posted 2 months ago

Apply

11 - 15 years

13 - 17 Lacs

Gurgaon

Work from Office

Responsibilities: As a member of the data engineering team, you will be the key technical expert developing and overseeing PepsiCo's data product build and operations, driving a strong vision for how data engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help lead the development of very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users, in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Important disclaimer: The candidate is required to work 4 weekends per quarter (you may be required to work only on Saturday, only on Sunday, or both) and will receive compensatory time off. Please note that this role will be based ONLY in India and does not involve any movement to other PepsiCo offices outside India in the future.

Responsibilities:
- Actively contribute to code development in projects and services.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
- Implement best practices around systems integration, security, performance, and data management.
- Empower the business by creating value through increased adoption of data, data science, and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for data products and processes running in production.
- Support large-scale experimentation done by data scientists; prototype new approaches and build solutions at scale; research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer; create and audit reusable packages or libraries.

Qualifications:
- 11+ years of overall technology experience, including at least 4 years of hands-on software development, data engineering, and systems architecture.
- 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience in SQL optimization and performance tuning, plus development experience in programming languages like Python, PySpark, and Scala.
- 3+ years of cloud data engineering experience in Azure; experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools; fluency with Azure cloud services (Azure certification is a plus).
- Experience integrating multi-cloud services with on-premises technologies.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations (a tool-agnostic sketch follows this listing).
- Experience building and operating highly available, distributed systems for extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
- Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
- Experience with version control systems like GitHub and with deployment and CI tools.
- Experience with statistical/ML techniques is a plus; experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts; familiarity with business intelligence tools (such as Power BI).

Education: B.Tech/BE in Computer Science, Math, Physics, or other technical fields.

Skills, abilities, knowledge:
- Excellent communication skills, both verbal and written, with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading and mentoring data teams.
- Strong change manager, comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple competing projects and priorities simultaneously.
- Positive and flexible attitude, adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Fosters a team culture of accountability, communication, and self-management; proactively drives impact and engagement while bringing others along; consistently attains or exceeds individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
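On the data-profiling point, tools like Deequ and Great Expectations formalise checks that can also be prototyped in plain pandas. A tool-agnostic, hedged sketch (the dataset, columns, and thresholds are invented):

```python
# Hypothetical lightweight data-quality checks in pandas, standing in for
# what Deequ or Great Expectations would formalise. All names are assumed.
import pandas as pd

def profile_checks(df: pd.DataFrame) -> dict:
    """Return a mapping of check name -> pass/fail for a shipments dataset."""
    return {
        "no_null_ids": df["shipment_id"].notna().all(),
        "unique_ids": df["shipment_id"].is_unique,
        "weights_positive": (df["weight_kg"] > 0).all(),
        "null_rate_under_5pct": df.isna().mean().max() < 0.05,
    }

if __name__ == "__main__":
    df = pd.read_parquet("shipments.parquet")  # assumed input; needs pyarrow
    results = profile_checks(df)
    failed = [name for name, ok in results.items() if not ok]
    if failed:
        raise SystemExit(f"data quality checks failed: {failed}")
    print("all checks passed")
```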

Posted 2 months ago

Apply

3 - 8 years

5 - 9 Lacs

Bengaluru

Work from Office

About The Role: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Must-have technical skills:
- 4+ years on Snowflake: advanced SQL expertise.
- 4+ years of data warehouse experience: hands-on knowledge of the methods to identify, collect, manipulate, transform, normalize, clean, and validate data; star schema (a sketch follows this listing), normalization/denormalization, dimensions, aggregations, etc.
- 4+ years working in reporting and analytics environments: development, data profiling, metric development, CI/CD, production deployment, troubleshooting, query tuning, etc.
- 3+ years on Python: advanced Python expertise.
- 3+ years on any cloud platform (AWS preferred): hands-on experience with Lambda, S3, SNS/SQS, and EC2 is the bare minimum.
- 3+ years on any ETL/ELT tool: Informatica, Pentaho, Fivetran, dbt, etc.
- 3+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.).

Must-have soft skills:
- Clear written and verbal communication, especially around time off and delays in delivery.
- Team player: works in the team and with the team.
- Enterprise experience: understands and follows enterprise guidelines for CI/CD, security, change management, RCA, on-call rotation, etc.

Nice to have:
- Technical certifications from AWS, Microsoft, Azure, GCP, or another recognized software vendor.
- 4+ years on any ETL/ELT tool (Informatica, Pentaho, Fivetran, dbt, etc.).
- 4+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.).
- 4+ years of team lead experience.
- 3+ years in a large-scale support organization supporting thousands of users.

Delivery performance parameters and measures:
1. Continuous integration, deployment, and monitoring of software: 100% error-free onboarding and implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, software management, troubleshooting queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & reporting: 100% on-time MIS and report generation.

Competencies: Client Centricity, Passion for Results, Execution Excellence, Problem Solving & Decision Making, Effective Communication.
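For the star-schema and Snowflake SQL requirement, a hedged sketch: the connector usage follows the standard snowflake-connector-python package, while the account details and the fact/dimension table names are invented for illustration.

```python
# Hypothetical star-schema rollup against Snowflake: join a fact table to
# its date and product dimensions and aggregate monthly revenue.
import snowflake.connector

STAR_QUERY = """
SELECT d.year,
       d.month,
       p.category,
       SUM(f.revenue) AS total_revenue
FROM   fact_sales f
JOIN   dim_date d    ON f.date_key = d.date_key
JOIN   dim_product p ON f.product_key = p.product_key
GROUP  BY d.year, d.month, p.category
ORDER  BY d.year, d.month
"""

conn = snowflake.connector.connect(
    account="example-account",   # all credentials here are placeholders
    user="analyst",
    password="***",
    warehouse="ANALYTICS_WH",
    database="DWH",
    schema="PUBLIC",
)
try:
    for row in conn.cursor().execute(STAR_QUERY):
        print(row)
finally:
    conn.close()
```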

Posted 2 months ago

Apply

5 - 9 years

9 - 19 Lacs

Pune, Bengaluru, Coimbatore

Work from Office

Job Title: Azure Data Engineer - Manager (5-9 years of experience)
Location: Bangalore, Coimbatore, or Pune
Employment Type: Full-time

Job Description: We are looking for an experienced Azure Data Engineer with 9-13 years of experience to take on a strong technical and managerial role within our organization. The ideal candidate will have a strong technical background in Databricks, Azure Data Services, and data engineering, coupled with proven leadership and team-management skills. This role requires a balance of hands-on technical expertise and the ability to lead, mentor, and manage a team of data engineers to deliver high-quality data solutions.

Key Responsibilities:

Technical responsibilities:
- Data pipeline development: design, develop, and optimize scalable data pipelines using Databricks, Azure Data Factory, and other Azure data services; implement advanced ETL/ELT processes to handle large volumes of data from diverse sources; ensure data pipelines are efficient, reliable, and scalable to meet business needs.
- Data processing and analytics: write and optimize complex SQL queries for data extraction, transformation, and analysis; use PySpark for large-scale data processing, transformation, and analytics; implement data partitioning, indexing, and caching strategies for optimal performance (a hedged sketch follows this listing).
- Data integration and governance: integrate data from multiple sources, including structured, semi-structured, and unstructured data; implement data governance practices to ensure data quality, consistency, and security; monitor and troubleshoot data pipelines to ensure data accuracy and availability.
- Architecture and design: define and implement data architecture best practices, including data lake and data warehouse design; collaborate with cross-functional teams to design and deliver end-to-end data solutions; evaluate and recommend new tools and technologies to enhance data engineering capabilities.

Managerial responsibilities:
- Team leadership: lead, mentor, and manage a team of data engineers, ensuring high performance and professional growth; assign tasks, set priorities, and ensure timely delivery of projects; conduct regular team meetings, performance reviews, and one-on-one sessions.
- Project management: oversee the end-to-end delivery of data engineering projects, ensuring alignment with business goals; collaborate with stakeholders to define project scope, timelines, and deliverables; manage project risks, issues, and dependencies to ensure successful execution.
- Stakeholder collaboration: work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions; act as a bridge between technical teams and business stakeholders, ensuring clear communication and alignment.
- Process improvement: identify opportunities for process improvement and implement best practices in data engineering; drive innovation and continuous improvement within the team.

Must-have skills:
- Databricks: extensive hands-on experience with Databricks for data processing, analytics, and machine learning.
- Azure data services: proficiency in Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, and Azure SQL Database.
- SQL: expertise in writing and optimizing complex SQL queries.
- PySpark: advanced knowledge of PySpark for large-scale data processing and transformation.
- ETL/ELT: strong understanding of ETL/ELT processes and tools.
- Data modeling: deep knowledge of data modeling techniques and best practices.
- Leadership: proven experience leading and managing teams of data engineers.
- Project management: strong project-management skills with the ability to manage multiple projects simultaneously.

Good-to-have skills:
- Experience with Azure DevOps for CI/CD pipelines.
- Knowledge of Delta Lake for building reliable data lakes.
- Familiarity with streaming data technologies like Azure Event Hubs or Kafka.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 9-13 years of experience in data engineering, with a focus on Azure and Databricks.
- Proven experience in a leadership or managerial role, leading teams of 5+ members.
- Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate or Databricks Certified Associate Developer are a plus.

Soft skills:
- Strong leadership and team-management skills.
- Excellent communication and interpersonal skills.
- Ability to work in a fast-paced, dynamic environment.
- Strong problem-solving and analytical skills.
- Self-motivated, with a strong sense of ownership and accountability.
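On partitioning and caching in PySpark, a small hedged sketch (paths and columns invented): caching pays off when a filtered dataframe feeds several downstream computations, and partitioned output lets later queries prune files by date.

```python
# Hypothetical example of caching and partitioned writes in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

sales = spark.read.parquet("/mnt/raw/sales")  # assumed input path

# Cache a dataframe that several downstream aggregations will reuse.
recent = sales.filter(F.col("order_date") >= "2024-01-01").cache()
print(recent.count())  # an action materialises the cache

# Partition the output by date so queries can prune irrelevant files.
(
    recent.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/mnt/curated/sales_by_day")
)
```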

Posted 2 months ago

Apply

6 - 8 years

25 - 27 Lacs

Navi Mumbai

Hybrid

Huntsman Corporation is now looking for a dynamic individual as a Pricing and Commercial Excellence Manager, located in Mumbai, India. As a Pricing and Commercial Excellence Manager you will:
- Lead the pricing excellence strategy and partner with the sales team to drive the right price to the customer.
- Regularly analyse material-customer combinations to drive higher value for the business.
- Continuously benchmark value creation against the market and competition.
- Support strategic initiatives for the business to drive performance while monitoring financial goals.
- Lead ongoing pricing activities across product, sales, marketing, and finance.
- Drive value capture by implementing price strategy and managing deal execution; set pricing targets; manage the pricing organization.
- Develop internal and external communications related to pricing (e.g. price change communications).
- Lead development of long-term pricing strategy for complex products/services.
- Lead the pricing work group to determine pricing resource allocation and to achieve the cross-functional coordination needed to develop and implement pricing strategy.
- Design and disseminate best practices in pricing process and execution.
- Act as PMO lead to drive strategic and tactical business initiatives; lead execution of strategic projects relevant to 2022 and 2025; lead achievement of JVC targets.
- Regularly review business risks and opportunities; proactively identify mitigation plans and highlight them to the leadership team for quick decisions.
- Work on churn-rate reduction (flagging and working on lost customers); track success metrics for each distributor (volumes, BD, pricing, CM).
- Produce the monthly ISC newsletter: market macros, key developments, pricing recommendations, competition activities, new business wins, progress of large projects, innovation progress.
- Monitor the performance of key competitors from available reports and sales calls, summarize key highlights, and create value for the business.

What skills and experiences are we looking for?
- Chartered Accountant or MBA with at least 6 years of post-qualification experience, including a minimum of 2-3 years of relevant experience in pricing.
- Change-management attitude; strategic planning and leadership; implementation of new processes and systems.
- Strong communication and interpersonal skills; ability to influence cross-functional stakeholders through data and rationale.
- Ability to create business decks for presentation to IMT/ELT; ability to work as an individual contributor.
- Strategic ability to develop high-impact pricing strategies; analytical ability to understand and define critical issues (benefit/value map, pre-approved discount levels, deal waterfall, volume impact).
- Strong negotiating skills based on a thorough understanding of sales and marketing practices.

What can we offer? Huntsman offers unsurpassed opportunities to build a successful future. We are a global specialty chemical company with locations in 25 countries around the world, employing over 6,000 associates. Our diverse portfolio creates a range of career fields, including manufacturing, research and development, technical services, sales and marketing, and customer service. Here, you can make an impact and make a difference. Come join us. Huntsman is proud to be an equal opportunity workplace and is an affirmative action employer.
We provide equal employment opportunities (EEO) to all qualified applicants for employment, without regard to race, color, religion, sex, national origin, disability status, protected veteran status, gender identification, sexual orientation and/or expression, or any other characteristic protected by national or local law in every location in which we have facilities. Please refer to https://www.huntsman.com/privacy/online-privacy-notice for the Company's Data Privacy and Protection information. All unsolicited resumes presented by recruitment agencies are treated as pro bono information or service. Huntsman is aware of a scam involving fraudulent job offers. Huntsman does not make job offers until after a candidate has submitted a job application and has participated in a face-to-face interview. Please be advised that emails from Huntsman always end in "@huntsman.com" and that any job offer that requires payment or requires you to deposit a check is likely a scam. If you have questions about any open positions at Huntsman, please visit our Careers website at http://www.huntsman.com/corporate/a/Careers.

Posted 2 months ago

Apply

6 - 10 years

20 - 25 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office

Responsibilities:
- Develop and automate processes for gathering expected results from data sources and comparing data results from testing.
- Assist with the development and maintenance of smoke, performance, functional, and regression tests to ensure code functions as designed.
- Work with the team to understand how changes in the software product affect maintenance of test scripts and the automated testing environments.
- Own the test automation framework and build appropriate test automation where needed.
- Write, monitor, execute, and evaluate application tests using industry-standard automated testing tools.
- Set up data, tools, and databases to facilitate the testing process.
- Develop and maintain constructive working relationships within the team and with other functional teams.

Skills & Experience:
- Lead a QA team of at least 3 members.
- 6+ years of technical QA experience, with a minimum of 2 years in automation testing.
- Experience writing test code in Python with pytest or Robot Framework.
- Strong SQL skills and use of big data cloud platforms.
- Experience writing and maintaining automated tests for big data projects; experience/knowledge of Apache Spark.
- Knowledge of a BDD framework like Cucumber or SpecFlow.
- Excellent knowledge and experience of data testing strategies: data validation, process validation, outcome validation, code coverage.
- Execute automated big data testing tasks such as pen testing, architecture testing, migration testing, performance testing, security testing, and visualization testing.
- Automate testing of relational, flat-file, XML, NoSQL, cloud, and big data sources.
- Hands-on experience with the ETL Validator testing tool for automating ETL/ELT validation.
- Experience in test setup, software installations, and pipelines in a CI/CD environment.
- Hands-on experience using monitoring tools like New Relic, Grafana, etc.

Behavioural fit:
- Highly technical with a keen eye for detail.
- Driven, self-motivated, and results oriented.
- Confident, with the ability to challenge if necessary.
- Structured and organised.
- Ability to work in a cross-functional, multicultural team and in a collaborative environment with minimal supervision.
- Ability to multitask and to plan, organize, and prioritize multiple projects.

Role key performance indicators:
- Testing task completion within the time frame.
- Automation and regression testing percentage goal every quarter.
- Report and communicate issues to the scrum master, relevant team members, and stakeholders.
- Take ownership of the testing task.
- Quality and consistency of data across the whole data landscape.
- Quality of documentation.

Location: Remote

Posted 2 months ago

Apply

5 - 7 years

7 - 9 Lacs

Bengaluru

Work from Office

Role description: The HSW specialist will be responsible for delivering elements of our global HSW strategic objectives to achieve our HSW vision of protecting our people. This role focuses on automation, data management, and reporting to streamline processes and enhance data-driven decision-making.

Role accountabilities:
- Automation development: design, implement, and optimize automated systems and tools to streamline data collection and reporting processes, reducing manual workload and improving data accuracy.
- Data management: oversee the collection, validation, and storage of Health, Safety and Wellbeing (HSW) data, ensuring it is comprehensive, high-quality, and accessible for analysis.
- Trend analysis: analyse HSW data to identify trends, patterns, and potential risks; provide actionable insights and recommendations to leadership teams to inform decision-making and improve overall HSW strategies.
- Reporting: prepare and manage both internal and external HSW reports, ensuring compliance with organizational policies and regulatory requirements; customize reports to meet the needs of various stakeholders, including executive leadership and external auditors.
- Stakeholder collaboration: work closely with HSW, IT, and People teams to ensure data systems are integrated and aligned with organizational goals; collaborate with team members to support the development of dashboards and visualization tools.
- Continuous improvement: stay updated with the latest advancements in data analytics and automation technologies; proactively recommend enhancements to current systems and processes to improve efficiency and effectiveness.
- Training and support: provide training and guidance to HSW teams on using data platforms and reporting tools effectively; ensure that stakeholders understand how to interpret data insights and use them to drive HSW outcomes.
- Compliance and data security: ensure that all HSW data management practices comply with relevant data protection laws and internal policies, maintaining high standards of data security and privacy.
- Global HSW reporting system: administer the Cority reporting system, ensuring data integrity and management, and supporting configuration and customization, reporting and analytics, and audit and compliance.
- Systems and applications: build simple applications using existing Arcadis tools and proficiently navigate the Sphera, Cority, and Oracle platforms.
- Take on additional duties as requested by the H&S and H&W Director, which may encompass aspects of other HSW specialist roles, to support the delivery of our global HSW strategy, including operations, culture, capability, and communications.

Qualifications & Experience:
- Educational background: Bachelor's or Master's degree in Health, Safety, Environmental Science, Occupational Health, or a related field.
- Professional experience: at least 5-7 years in Health, Safety and Wellbeing roles, preferably within a multinational organization, with a focus on data management and automation; proven experience in developing H&S metrics and non-financial reporting.
- Technical skills: proficiency with the Microsoft suite of technologies, such as Power BI, Power Apps, and Power Automate, to solve complex business problems; proven experience using Excel, Python, or R; proficiency in analyzing data using R and/or Python; hands-on experience developing dashboards using BI tools; experience in data transformation (ELT) and proficiency working with relational database management systems (RDBMS); good understanding of statistical analysis and hypothesis testing; knowledge of Robotic Process Automation (RPA) is highly desirable.
- Analytical and problem-solving skills: strong analytical capabilities and critical thinking; ability to translate a long-term HSW vision into an implementable strategy; experience driving organizational change to continuously improve HSW outcomes.
- Communication and interpersonal skills: fluent English, both written and verbal, to collaborate effectively with global and regional teams, including the ability to speak credibly and with gravitas about HSW topics; ability to communicate analysis with clarity and precision; collaborative team player with a hands-on approach and a can-do attitude.
- Adaptability and innovation: ability to thrive in a fast-paced, continuously evolving environment; out-of-the-box thinker with a passion for HSW and for driving greater people engagement; ability to inspire colleagues, clients, and potential partners with innovative ideas, combining discipline with the pragmatism to implement them rapidly; ability to adapt quickly when handling multiple or evolving tasks arising from new engagements and re-prioritized deadlines.

Posted 2 months ago

Apply

8 - 12 years

15 - 25 Lacs

Bengaluru, Kochi, Coimbatore

Work from Office

Candidates with 8 years of experience in Azure, Unity Catalog, SQL, Python, Databricks, and ETL. Hands-on exposure to Spark for big data processing. Deep understanding of ETL/ELT processes and pipeline building, CI/CD, and DevOps practices.

Required candidate profile: The ideal candidate will have strong leadership experience, hands-on technical skills, and a deep understanding of cloud-based data solutions. Exposure to Auto Loader and DLT streaming.

Posted 2 months ago

Apply

12 - 18 years

20 - 35 Lacs

Bengaluru, Kochi, Coimbatore

Work from Office

Candidates with 12 years of experience in IT, including 5+ years as a Data Architect. Working experience in data engineering (Azure Data Factory, Azure Synapse, Data Lake, Databricks, Streaming Analytics) and in DevOps practices for CI/CD pipelines.

Required candidate profile: Design and implementation experience for both ETL- and ELT-based solutions, SQL, Spark/Scala, and data analytics. Working experience with Logic Apps, API Management, SOLID design principles, and modelling methods.

Posted 2 months ago

Apply

5 - 10 years

7 - 17 Lacs

Hyderabad

Work from Office

About this role: Wells Fargo is seeking a Lead Software Engineer.

In this role, you will:
- Lead complex technology initiatives, including companywide initiatives with broad impact.
- Act as a key participant in developing standards and companywide best practices for engineering complex, large-scale technology solutions across technology engineering disciplines.
- Design, code, test, debug, and document for projects and programs.
- Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented technical factors.
- Make decisions in developing standards and companywide best practices for engineering and technology solutions, requiring an understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives.
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
- Lead projects and teams, or serve as a peer mentor.

Required qualifications:
- 5+ years of software engineering experience, or equivalent demonstrated through one or a combination of work experience, training, military experience, or education.

Desired qualifications:
- 4+ years of Spark development experience.
- 4+ years of Scala/Java development for Spark, focusing on the functional programming paradigm.
- Spark SQL, Streaming, and DataFrame/Dataset API experience.
- Spark query tuning and performance optimization (a hedged sketch follows this listing).
- SQL and NoSQL database integration with Spark (MS SQL Server and MongoDB).
- Deep understanding of distributed systems (CAP theorem, partitioning and bucketing, replication, memory layouts, consistency).
- Deep understanding of Hadoop and cloud platforms, HDFS, ETL/ELT processes, and Unix shell scripting.

Job expectations:
- Experience working with the Agile development methodology, Git, and Jira.
- Experience/working knowledge of technologies like Kafka, Cassandra, Oracle RDBMS, and JSON structures.
- Python development with or without Spark.
- Experience in the banking/financial domain.
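Spark query tuning can be illustrated with one of its most common levers, an explicit broadcast-join hint; this hedged PySpark fragment uses invented table paths:

```python
# Hypothetical broadcast-join tuning in PySpark: hint that the small
# dimension table should be shipped to executors instead of shuffled.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("/data/transactions")  # large table (assumed)
dims = spark.read.parquet("/data/branches")       # small table (assumed)

joined = facts.join(broadcast(dims), "branch_id")
joined.explain()  # expect BroadcastHashJoin in the plan, with no shuffle of dims
```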

Posted 2 months ago

Apply

4 - 7 years

6 - 9 Lacs

Kolkata

Work from Office

Overview: We are seeking a skilled Data Engineer with 4+ years of experience to join our dynamic team at Evoort. The ideal candidate will have expertise in SQL, Python, Snowflake, Azure services, and Databricks, with a strong ability to work with large datasets and optimize data pipelines.

Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes to support business and analytics needs.
- Work extensively with SQL and Python to process, transform, and manage structured and unstructured data.
- Utilize Snowflake and Azure services to build efficient, high-performance data models and workflows.
- Develop and optimize data solutions using Databricks, ensuring smooth data integration and transformation.
- Collaborate with cross-functional teams, including data analysts and business stakeholders, to ensure seamless data accessibility.
- Monitor data pipelines for performance, reliability, and cost optimization.
- Stay up to date with emerging trends in cloud-based data engineering, analytics, and big data technologies.

Requirements:
- 4+ years of hands-on experience as a Data Engineer.
- Strong proficiency in SQL, Python, and Snowflake for data processing and transformation.
- Experience working with Microsoft Azure services for cloud-based data solutions.
- Expertise in Databricks for data engineering and analytics.
- Ability to work with large-scale datasets and optimize ETL/ELT pipelines.
- Strong problem-solving skills and the ability to work in a collaborative, fast-paced environment.

Posted 3 months ago

Apply

10 - 15 years

15 - 19 Lacs

Bengaluru

Work from Office

Job Title: Data Architect
Location: Mahadevpura, Bangalore
Experience: 8+ years (certification in cloud technologies or data engineering tools)

Job Description: We are seeking a skilled Data Architect to join our team. This role involves designing and implementing data integration workflows, optimizing data pipelines, and collaborating with cross-functional teams to ensure data systems align with business objectives.

Key Responsibilities:
- Data integration: design and implement data integration workflows (APIs, ETL/ELT tools, data lakes) to connect disparate systems.
- Data engineering: develop scalable data pipelines using industry-standard tools and languages; optimize data storage, retrieval, and processing workflows.
- Collaboration: partner with data analysts, data scientists, and business teams to deliver clean, usable datasets; work with IT teams for seamless deployment.
- Technology & tools: evaluate and implement data management tools (ETL platforms, cloud services, databases) while ensuring compliance with governance, privacy, and security policies.
- Documentation & reporting: maintain documentation for integration and engineering processes, providing regular updates on system performance and issues.

Required Skills & Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.
- Technical skills: proficiency in ETL/ELT tools (Talend, Informatica, Apache NiFi); strong programming skills in Python, SQL, Scala, or Java; experience with cloud platforms (AWS, Azure, GCP) and services (S3, Redshift, BigQuery); knowledge of SQL/NoSQL databases and data warehouse systems; familiarity with streaming and orchestration tools (Apache Kafka, Apache Airflow).
- Soft skills: strong problem-solving and analytical skills, excellent communication, and attention to detail with a focus on quality.

Preferred Qualifications:
- Experience with big data technologies (Hadoop, Spark).
- Knowledge of data governance/compliance standards (GDPR, CCPA).
- Certification in cloud technologies or data engineering tools.

Posted 3 months ago

Apply

6 - 9 years

5 - 13 Lacs

Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)

Work from Office

Data Engineer Lead with deep expertise in ETL/ELT processes, data integrity, and cost optimization of AWS cloud ETL and ELT workloads.

Posted 3 months ago

Apply

3 - 6 years

3 - 7 Lacs

Chennai

Work from Office

Develop and set up the transformation of data from sources to enable analysis and decision making. Maintain data flow from source to the designated target without affecting the crucial data flow, and play a critical part in the data supply chain by ensuring stakeholders can access and manipulate data for routine and ad hoc analysis. Implement projects focused on collecting, aggregating, storing, reconciling, and making data accessible from disparate sources. Provide support during the full lifecycle of data, from ingestion through analytics to action. Analyze and organize raw data. Evaluate business needs and objectives. Interpret trends and patterns. Conduct complex data analysis and report on results. Coordinate with source teams and end users and develop solutions. Implement data governance policies and support data-versioning processes. Maintain security and data privacy.

Requirements (must have):
- Proven hands-on experience building complex analytical queries in Teradata.
- 4+ years of extensive programming experience with Teradata Tools and Utilities.
- Hands-on experience with Teradata utilities such as FastLoad, MultiLoad, BTEQ, and TPT (a Python-driver sketch follows this listing).
- Experience in data quality management and best practices across data solution implementations.
- Experience in development, testing, and deployment; coding standards and best practices.
- Experience preparing technical design documentation.
- Strong team collaboration and experience working with remote teams.
- Knowledge of data modelling and database management, such as performance tuning of the enterprise data warehouse, data mart, and business intelligence reporting environments, and supporting the integration of those systems with other applications.

Good to have:
- Good Unix shell scripting skills.
- Experience in data transformation using ETL/ELT tools.
- Experience with different relational databases (i.e. Teradata, Oracle, PostgreSQL).
- Experience with CI/CD development and deployment tools (i.e. Maven, Jenkins, Git, Kubernetes).
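Besides the BTEQ and TPT utilities named above, Teradata queries can also be driven from Python through the teradatasql DB-API driver; a hedged sketch with placeholder connection details and an invented table:

```python
# Hypothetical Teradata query via the teradatasql DB-API driver.
# Host, credentials, and the table are placeholders.
import teradatasql

with teradatasql.connect(host="tdhost", user="dbc", password="***") as con:
    with con.cursor() as cur:
        cur.execute(
            "SELECT account_id, SUM(balance) AS total "
            "FROM bank.accounts GROUP BY account_id"
        )
        for row in cur.fetchall():
            print(row)
```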

Posted 3 months ago

Apply

6 - 8 years

8 - 11 Lacs

Pune

Work from Office

Role Description: An Engineer is responsible for designing and developing entire engineering solutions to accomplish business goals. Key responsibilities of this role include ensuring that solutions are well architected, with maintainability and ease of testing built in from the outset, and that they can be integrated successfully into the end-to-end business process flow. The Engineer will have gained significant experience through multiple implementations and will have begun to develop both depth and breadth in several engineering competencies. They have extensive knowledge of design and architectural patterns, provide engineering thought leadership within their teams, and play a role in mentoring and coaching less experienced engineers.

Your key responsibilities:
- Design, develop, and maintain data pipelines using Python and SQL on GCP.
- Apply Agile methodologies and ETL, ELT, data movement, and data processing skills.
- Work with Cloud Composer to manage and process batch data jobs efficiently.
- Develop and optimize complex SQL queries for data analysis, extraction, and transformation (a BigQuery sketch follows this listing).
- Develop and deploy Google Cloud services using Terraform.
- Implement CI/CD pipelines using GitHub Actions.
- Consume and host REST APIs using Python.
- Monitor and troubleshoot data pipelines, resolving any issues in a timely manner.
- Ensure team collaboration using Jira, Confluence, and other tools.
- Quickly learn new and existing technologies; strong problem-solving skills.
- Write advanced SQL and Python scripts.
- Certification as a Google Cloud Professional Data Engineer is an added advantage.

Your skills and experience:
- 6+ years of IT experience as a hands-on technologist.
- Proficient in Python for data engineering, and proficient in SQL.
- Hands-on experience with GCP Cloud Composer, Dataflow, BigQuery, Cloud Functions, and Cloud Run; GKE is good to have.
- Hands-on experience hosting and consuming REST APIs.
- Proficient in Terraform (HashiCorp).
- Experienced with GitHub, GitHub Actions, and CI/CD.
- Experience automating ETL testing using Python and SQL.
- Good to have: Apigee, Bitbucket.
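For the BigQuery side of this role, a hedged sketch using the standard google-cloud-bigquery client; the project, dataset, and table names are invented:

```python
# Hypothetical BigQuery query from Python using the official client library.
from google.cloud import bigquery

client = bigquery.Client()  # picks up application-default credentials

QUERY = """
SELECT trade_date, COUNT(*) AS trades
FROM `example-project.markets.trades`
WHERE trade_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY trade_date
ORDER BY trade_date
"""

for row in client.query(QUERY).result():
    print(row.trade_date, row.trades)
```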

Posted 3 months ago

Apply

4 - 8 years

6 - 10 Lacs

Bengaluru

Work from Office

What will your essential responsibilities include?
- Moderate to extensive experience in relevant architecture roles, with broad and deep expertise in insurance (especially cyber) data strategy and architecture; working experience with organizations that operate multiple entities globally is preferred.
- Understanding of security committees such as CFIUS.
- Deep expertise in distributed and decentralized domain design to support a multi-entity model architecture.
- Understanding of data mesh concepts and their applicability to multi-entity architecture.
- Understanding of federated governance approaches to support a multi-entity model.
- Hands-on experience with architecture and design across: the Azure cloud platform environment; big data technologies; machine learning, AI, and LLM knowledge, especially how to apply these concepts and capabilities to cyber data analytics solutions; message queues, streaming technologies, and event-driven architecture; unstructured data management; the O365 environment; container and orchestration platforms; relational databases and structured query languages; microservices; integration and ETL/ELT technologies; data management and analytics; reference and master data management; access control and security.
- Effective management and leadership skills, with the ability to influence department strategy.
- Outstanding organizational skills, attention to detail, and the ability to handle change.
- Excellent presentation, communication (oral and written), and relationship-building skills across all levels of management.
- Excellent problem-solving and analysis skills.
- Knowledge and active use of Agile, Scrum, and continuous delivery.

You will report to the Principal Architect.

Posted 3 months ago

Apply

3 - 6 years

8 - 18 Lacs

Pune

Hybrid

We are seeking a highly skilled Senior Engineer with expertise in ETL processes, SQL, GCP (Google Cloud Platform), and Python. As a Senior Engineer, you will play a key role in the design, development, and optimization of data pipelines and workflows that drive business insights and analytics. You will collaborate closely with cross-functional teams to ensure data systems are scalable, robust, and efficient.

Key Responsibilities:
- Design & develop ETL pipelines: build, maintain, and optimize scalable ETL workflows to ingest, transform, and load large datasets using best practices.
- Database management: write efficient, optimized SQL queries to extract, manipulate, and aggregate data from relational databases; design and implement database schemas for optimal performance.
- Cloud infrastructure: utilize Google Cloud Platform services, such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage, to develop and manage cloud-based data solutions (a Pub/Sub sketch follows this listing).
- Automation & scripting: use Python to automate processes, build custom data transformation logic, and integrate with various data systems and services.
- Performance tuning: ensure data pipelines and queries are optimized for speed and cost; troubleshoot issues, implement best practices, and improve system performance.
- Collaboration & mentorship: collaborate with data analysts, data scientists, and other stakeholders to understand data requirements; provide mentorship and guidance to junior engineers.
- Data quality & governance: ensure high-quality data through validation, error handling, logging, and monitoring of ETL processes; implement data governance best practices to maintain consistency and integrity.
- Documentation & reporting: document data pipeline designs, coding standards, and best practices; create reports for stakeholders to provide insights into data processing activities.

Required Skills and Qualifications:
- ETL expertise: strong experience building, deploying, and optimizing ETL processes with tools like Apache Airflow, Dataflow, or custom Python scripts.
- SQL proficiency: advanced SQL skills, with experience writing complex queries, optimizing performance, and working with large datasets.
- Cloud platform (GCP): deep understanding and hands-on experience with Google Cloud Platform, specifically BigQuery, Cloud Storage, Pub/Sub, Dataflow, and other data-related services.
- Python: proficient in Python, especially for data manipulation, ETL automation, and integration with cloud-based solutions.
- Data engineering best practices: familiarity with modern data engineering frameworks, version control, CI/CD pipelines, and agile methodologies.
- Problem solving: strong analytical and troubleshooting skills, with a focus on identifying solutions to complex data engineering challenges.
- Communication: excellent communication skills; able to work effectively in a team and engage with non-technical stakeholders.

Preferred Qualifications:
- Experience with other cloud platforms such as AWS or Azure.
- Knowledge of data lake and data warehouse architectures.
- Familiarity with containerization (Docker, Kubernetes) and orchestration tools.
- Understanding of data privacy, security, and compliance best practices.

Education & Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
- 5+ years of experience in data engineering or a related technical field.
- Proven experience designing and implementing data solutions in a cloud environment.
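This role pairs BigQuery with streaming ingestion through Pub/Sub. A hedged publisher sketch using the standard google-cloud-pubsub client (the project ID, topic name, and event payload are invented):

```python
# Hypothetical Pub/Sub publish using the official client library.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "raw-events")  # assumed

event = {"order_id": 42, "amount": 99.5}
future = publisher.publish(topic_path, json.dumps(event).encode("utf-8"))
print("published message id:", future.result())  # blocks until the broker acks
```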

Posted 3 months ago

Apply

6 - 9 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

We are seeking Data Engineers for our Enterprise Integration Team. As a Data Engineer at Paramount Global, you will help us unlock the full potential of our real-time and relational data and provide our businesses with insights that allow us to make better and faster decisions. The ideal candidate must have expertise in developing Python-based data integrations and APIs.

Responsibilities:
- Building APIs and data pipelines using Python (a hedged API sketch follows this listing).
- Orchestrating data workflows.
- Working in a time-sensitive project environment where managing competing priorities is a must-have skill.
- Participating in meetings with engineers and business users.

Basic qualifications:
- Minimum 6+ years of hands-on experience as a data engineer.
- Strong ability to develop solutions with Python and SQL.
- In-depth knowledge of enterprise integration and automation patterns (ESB, ETL, ELT, API broker, consolidating and rationalizing microservices).

Additional qualifications:
- Database programming; scripting in a Linux environment.
- Solid grasp of cloud data warehousing platforms such as Snowflake, Redshift, or BigQuery.
- Hands-on experience with Amazon Web Services.
- Ability to function collaboratively as part of a fast-paced, customer-oriented team, perform effectively as an independent producer under broad management direction, and a demonstrated willingness to support the team on all levels to get the job done.
- Superior communication skills in working with technical and non-technical users, and the ability to cultivate and maintain collaborative relationships at all levels of an organization.
- Bachelor's degree in Computer Science, Mathematics, Engineering, Data Science, or Statistics preferred.
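The core ask here is Python APIs over data. The posting does not name a framework, so as a hedged sketch FastAPI is used, with an in-memory stand-in for a real warehouse lookup:

```python
# Hypothetical data-serving API. Run with:  uvicorn app:app --reload
from fastapi import FastAPI, HTTPException

app = FastAPI()

# Stand-in for a real lookup against a warehouse or cache.
METRICS = {"daily_active_users": 123456, "streams_started": 987654}

@app.get("/metrics/{name}")
def read_metric(name: str) -> dict:
    if name not in METRICS:
        raise HTTPException(status_code=404, detail="unknown metric")
    return {"name": name, "value": METRICS[name]}
```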

Posted 3 months ago

Apply

7 - 12 years

35 - 40 Lacs

Hyderabad

Work from Office

Looking for 8+ years of experience; Python plus Azure/AWS cloud is mandatory. Interview process: 1st round virtual, 2nd round face-to-face. Must have 7+ years of relevant experience, with hands-on experience in ETL/ELT processes, cloud-based data solutions, and big data technologies.

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies