
1766 Data Engineering Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

14 - 17 Lacs

Bengaluru

Work from Office

As a Big Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: as a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions, and you will be involved in data engineering activities like creating pipelines/workflows for Source to Target and implementing solutions that tackle the client's needs. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Big Data development with Hadoop, Hive, Spark, PySpark, and strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of cloud (AWS, Azure, etc.). Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java. Preferred technical and professional experience: basic understanding of or experience with predictive/prescriptive modeling. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
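
For illustration only: a minimal PySpark sketch of the kind of source-to-target pipeline this posting describes. The paths, columns, and transformations are invented for the example, not part of the role.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("source-to-target").getOrCreate()

# Extract: read a raw source file (hypothetical path and schema)
orders = spark.read.option("header", True).csv("s3a://raw-zone/orders.csv")

# Transform: cast types, drop rows missing the key, derive a date column
clean = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Load: write to the curated target zone, partitioned for downstream queries
clean.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-zone/orders/")
```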

Posted 1 week ago

Apply

10.0 - 11.0 years

8 - 12 Lacs

Noida

Work from Office

What We’re Looking For: Solid understanding of data pipeline architecture, cloud infrastructure, and best practices in data engineering. Strong grip on SQL Server, Oracle, Azure SQL, and working with APIs. Skilled in data analysis: identify discrepancies, recommend fixes. Proficient in at least one programming language: Python, Java, or C#. Hands-on experience with Azure Data Factory (ADF), Logic Apps, and Runbooks. Knowledge of PowerShell scripting and the Azure environment. Excellent problem-solving, analytical, and communication skills. Able to collaborate effectively and manage evolving project priorities. Roles and Responsibilities: Senior Data Engineer - Azure & Databricks. Development and maintenance of data pipelines; modernisation of the cloud data platform. At least 8 years of experience in the data engineering space, at least 4 years of experience in Apache Spark/Databricks, at least 4 years of experience in Python, and at least 7 years in SQL and the ETL stack.

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai, Kuala Lumpur

Work from Office

Responsibilities for Data Engineer: Create and maintain the optimal data pipeline architecture. Assemble large, complex data sets that meet functional/non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Qualifications for Data Engineer: We are looking for a candidate with 5+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools: advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases. Experience building and optimizing big data pipelines, architectures, and data sets. Experience performing root cause analysis of internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured data sets. Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management. A successful history of manipulating, processing, and extracting value from large disconnected datasets. Working knowledge of message queuing, stream processing, and highly scalable big data stores. Strong project management and organizational skills. Experience supporting and working with cross-functional teams in a dynamic environment. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational SQL and NoSQL databases, including MongoDB, Postgres, Cassandra, AWS Redshift, and Snowflake. Experience with data pipeline and workflow management tools such as Azkaban, Luigi, and Airflow. Experience with AWS cloud services: EC2, EMR, ETL, Glue, RDS, Redshift. Experience with stream-processing systems: Storm, Spark Streaming, etc. Experience with object-oriented/object function scripting languages: Python, Java, etc. Location: Chennai, India / Kuala Lumpur, Malaysia.
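
Since the listed orchestrators (Azkaban, Luigi, Airflow) all solve the same workflow-management problem, here is a hedged sketch of a minimal Apache Airflow DAG in the Airflow 2.x style; the DAG id, tasks, and schedule are placeholders, not anything specified by the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")  # placeholder step

def transform():
    print("clean and reshape the extracted data")  # placeholder step

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # named schedule_interval before Airflow 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # run extract, then transform
```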

Posted 1 week ago

Apply

10.0 - 13.0 years

30 - 40 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled and experienced Data Engineering Manager to lead and grow our data platform team. The ideal candidate will have a proven track record in driving data-driven innovation, leading high-performing teams, and delivering impactful solutions. You will be responsible for setting the strategic direction for our data platform initiatives, collaborating with cross-functional teams, and fostering a culture of data-driven decision-making. What you will do: Team Leadership: recruit, hire, and develop a talented team of data scientists and machine learning engineers; mentor and coach team members to foster professional growth and innovation. Strategic Leadership: develop and execute the long-term data science strategy, aligning with the overall business goals; collaborate with senior leadership to communicate the value of data science initiatives and secure necessary resources. Team Management: oversee the end-to-end lifecycle of data engineering projects, from ideation to deployment and ongoing monitoring; ensure projects are delivered on time, within budget, and meet the highest quality standards. Technical Expertise: implement best practices in coding, design, and architecture; provide technical guidance and support to the data engineering team. What You Have: 5+ years of team leadership and 10+ years of total data engineering experience working in partnership with large data sets. Solid experience developing and implementing DW architectures, OLAP & OLTP technologies, and data modeling with star/snowflake schemas to enable self-service reporting and a data lake. Hands-on, deep experience with the cloud data tech stack, and experience working with query engines like Redshift, EMR, AWS RDS, or similar. Experience building data solutions within any cloud platform using Postgres, SQL Server, Oracle, and other similar services and tools. Advanced SQL and programming experience with Python and/or Spark. Experience with, or a demonstrated understanding of, real-time data streaming tools like Kafka, Kinesis, or similar. Familiarity with BI & dashboarding tools and multi-dimensional modeling. Great problem-solving capabilities, troubleshooting data issues, and experience in stabilizing big data systems. Excellent communication and presentation skills, as you'll be regularly interacting with stakeholders and engineering leadership. Bachelor's or Master's in quantitative disciplines such as Computer Science, Computer Engineering, Analytics, Mathematics, Statistics, Information Systems, or other scientific fields. Nice to Have: certification in one of the cloud platforms (AWS/GCP/Azure).

Posted 1 week ago

Apply

10.0 - 15.0 years

15 - 19 Lacs

Bengaluru

Work from Office

Job Description: We are seeking a highly skilled and experienced Data Architect to design, implement, and manage the data infrastructure. As a Data Architect, you will play a key role in shaping the data strategy, ensuring data is accessible, reliable, and secure across the organization. You will work closely with business stakeholders, data engineers, and analysts to develop scalable data solutions that support business intelligence, analytics, and operational needs. Key Responsibilities: Design and implement effective database solutions (on-prem/cloud) and data models to store and retrieve data for various applications within the FinCrime domain. Develop and maintain robust data architecture strategies aligned with business objectives. Define data standards, frameworks, and governance practices to ensure data quality and integrity. Collaborate with data engineers, software developers, and business stakeholders to integrate data systems and optimize data pipelines. Evaluate and recommend tools and technologies for data management, warehousing, and processing. Create and maintain documentation related to data models, architecture diagrams, and processes. Ensure data security and compliance with relevant regulations (e.g., GDPR, HIPAA, CCPA). Participate in capacity planning and growth forecasting for the organization's data infrastructure. Through various POCs, assess and compare multiple tooling options and deliver use-cases based on an MVP model as per expectations. Requirements: Experience: 10+ years of experience in data architecture, data engineering, or related roles. Proven experience with relational and NoSQL databases. Experience with FinCrime domain applications and reporting. Strong experience with ETL tools, data warehousing, and data lake solutions. Familiarity with other data technologies such as Spark, Kafka, and Snowflake. Skills: Strong analytical and problem-solving skills. Proficiency in data modelling tools (e.g., ER/Studio, Erwin). Excellent understanding of database management systems and data security. Knowledge of data governance, metadata management, and data lineage. Strong communication and interpersonal skills to collaborate across teams. Subject matter expertise within the FinCrime domain. Preferred Qualifications

Posted 1 week ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Pune

Work from Office

About the Role: We are looking for a highly skilled and experienced Database Management Systems (DBMS) SDE3 + Senior Instructor to join our team. This role is a perfect blend of technical leadership and mentoring. You'll be contributing to cutting-edge web development projects while guiding and inspiring the next generation of software engineers. If you're passionate about coding, solving complex problems, and helping others grow, this role is for you! Key Responsibilities: Design and develop DBMS course content, lesson plans, and practical assignments. Keep the curriculum updated with the latest trends in database technologies. Deliver lectures and hands-on sessions on relational models, SQL, NoSQL, normalization, and database design. Use real-world examples to enhance student understanding of database concepts. Teach advanced topics like query optimization, database security, data warehousing, and cloud databases. Create and evaluate tests, quizzes, and projects to monitor student progress. Provide constructive feedback and mentorship to support student growth. Foster an engaging and collaborative classroom environment. Assist students in resolving database-related issues during practical sessions. Guide students on career paths in database management and related fields. Share insights on industry tools such as MySQL, PostgreSQL, MongoDB, and Oracle. Organize workshops, hackathons, and webinars for hands-on experience. Collaborate with instructors and departments to integrate DBMS into interdisciplinary projects. Adapt teaching strategies to accommodate various learning styles. Qualifications & Experience: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Minimum of 6-8 years of industry experience in data engineering or database management. Certifications such as Oracle DBA, Microsoft SQL Server, or AWS Certified Database Specialty are a plus. Prior experience as an instructor, trainer, or tutor is preferred. Technical Skills Required: Strong proficiency in relational databases (MySQL, PostgreSQL, Oracle) and NoSQL systems (MongoDB, Cassandra). Solid knowledge of SQL, PL/SQL, or T-SQL. Skilled in database design, normalization, indexing, and performance tuning. Familiarity with cloud-based databases like AWS RDS, Azure SQL, or Google Cloud Spanner. Preferred Teaching Skills: Experience using e-learning platforms such as Moodle, Blackboard, or Zoom. Strong presentation and communication skills for simplifying complex concepts. Passion for teaching, mentoring, and facilitating student success. Soft Skills: Ability to motivate and engage learners across different levels. Strong problem-solving and mentoring capabilities. Commitment to continuous learning and professional growth in the field of database management. Why Join Us: Work with Newton School of Technology in collaboration with Ajeenkya DY Patil University and Rishihood University, institutions at the forefront of reimagining tech education in India. Be part of an initiative that's shaping the next generation of tech leaders through industry-integrated, hands-on learning. Stay engaged with cutting-edge technologies while making a meaningful impact by mentoring and educating future professionals. Enjoy a competitive salary and a comprehensive benefits package. Thrive in a collaborative, innovative work culture based in Pune and Sonipat.

Posted 1 week ago

Apply

4.0 - 7.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Are you passionate about transforming data into actionable insights? Join our team at Infineon Technologies as a Staff Engineer in Data Engineering & Analytics! In this role, you'll be at the forefront of harnessing the power of data to drive innovation and efficiency. Collaborate with experts, design robust data ecosystems, and support digitalization projects. If you have a strong background in data engineering and database concepts, and a flair for turning complex business needs into solutions, we want to hear from you. Elevate your career with us and be part of shaping the future! Job Description: In your new role you will: Identify and understand the different needs and requirements of consumers and data providers (e.g., transaction processing, data warehousing, big data, AI/ML) and translate business digitalization needs into technical system requirements. Team up with our domain, IT, and process experts to assess the status quo, to capture the full value of our data, and to derive target data ecosystems based on business needs. Design, build, deploy, and maintain scalable and reliable data assets, pipelines, and architectures. Team up with domain, IT, and process experts, and especially with your key users, to validate the effectiveness and efficiency of the designed data solutions and contribute to their continuous improvement and future-proofing. Support data governance (data catalogue, data lineage, metadata, data quality, roles and responsibilities) and enable analytics use cases with a focus on data harmonization, connection, and visualization. Drive and/or contribute to digitalization projects in cross-functional coordination with IT and business counterparts (e.g., data scientists, domain experts, process owners). Act as first point of contact for data solutions in the ATV QM organization to consult and guide stakeholders to leverage the full value from data, and to cascade knowledge of industry trends and technology roadmaps for the major market players (guidelines, principles, frameworks, industry standards and best practice, upcoming innovation, new features and technologies). Your Profile: You are best equipped for this task if you have: A degree in Information Technology, Business Informatics, Computer Science, or a related field of studies. At least 5 years of relevant work experience related to Data Engineering and/or Analytics with a strong data engineering focus. Ability to translate complex business needs into concrete actions. Excellent expertise in database concepts (e.g., DWH, Hadoop/Big Data, OLAP) and related query languages (e.g., SQL, Scala, Java, MDX). Expertise in data virtualization (e.g., Denodo). Working knowledge of the latest toolsets for data analytics, reporting, and data visualization (e.g., Tableau, SAP BO), as well as Python, R, and Spark, is a plus. Ability to work both independently and within a team. #WeAreIn for driving decarbonization and digitalization. As a global leader in semiconductor solutions in power systems and IoT, Infineon enables game-changing solutions for green and efficient energy, clean and safe mobility, as well as smart and secure IoT. Together, we drive innovation and customer success, while caring for our people and empowering them to reach ambitious goals. Be a part of making life easier, safer and greener. Are you in? We are on a journey to create the best Infineon for everyone. This means we embrace diversity and inclusion and welcome everyone for who they are. At Infineon, we offer a working environment characterized by trust, openness, respect, and tolerance, and are committed to giving all applicants and employees equal opportunities. We base our recruiting decisions on the applicant's experience and skills. Please let your recruiter know if they need to pay special attention to something in order to enable your participation in the interview process. Click here for more information about Diversity & Inclusion at Infineon.

Posted 1 week ago

Apply

0.0 - 2.0 years

2 - 5 Lacs

Gurugram

Work from Office

Date: Jun 2, 2025. Company: Zelestra. Location: Gurugram, India. About Us: Zelestra (formerly Solarpack) is a multinational company fully focused on multi-technology renewables, with a vertically integrated business model focused on large-scale renewable projects in rapidly growing markets across Europe, North America, Latin America, Asia, and Africa. Headquartered in Spain, Zelestra has more than 1,000 employees worldwide and is backed by EQT, one of the three largest funds in the world with $200B in assets. One solution doesn't fit all, especially in energy. We're on a journey alongside our clients, assisting them in achieving their decarbonization goals. We are committed to developing tailor-made solutions by analyzing power market challenges and co-creating structured products based on customer insights. One of the top 10 sellers of clean energy to corporates in the world, according to Bloomberg NEF, we are committed to tailored solutions to meet customer needs. At Zelestra we aim to be a solid and solvent company, capable of executing quality and valuable projects for society and the environment. Therefore, we maintain a firm commitment to contribute directly to the social development of the communities and markets in which we operate, not only through the creation of economic value, but also through the generation of quality employment and through the social projects we promote. In the effort of supporting the business with innovative digital solutions, we want to recruit an experienced and dynamic engineer. Mission: As Zelestra Digitalhub's Junior Data Engineer, you will be responsible for designing robust data pipelines, building and maintaining databases, and deploying cloud infrastructure on AWS and Microsoft Azure. You will collaborate closely with our product manager and data scientist to ensure seamless data ingestion, model deployment, and system optimization. If you want to build a career in cloud technologies and data engineering, and have a passion for renewable energy, we'd love to hear from you. Responsibilities: We're looking for a Junior Data Engineer to support renewable energy operations by developing and managing data systems. You'll work on: Database Development: build and manage databases for solar and wind plant data, ensuring efficient real-time data ingestion using OPC protocols. Data Pipelines: design scalable, robust pipelines for real-time data processing. Cloud Infrastructure: deploy and manage cloud solutions on AWS and Azure using services like Glue, Athena, S3, Power BI, and Synapse Analytics. Model Deployment: support the integration of machine learning models into production pipelines. System Monitoring: maintain system health with automated monitoring tools and rapid issue resolution. Collaboration: work closely with data scientists and product managers to align engineering work with product goals. Documentation & Reporting: ensure clear documentation and deliver insights on system performance and optimization. Required Job Requirements: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 1-2 years of experience in software or data engineering with a focus on cloud technologies, preferably in renewable energy data (solar/wind). Proficient in Python, with experience building data pipelines and working with SQL/NoSQL databases. Familiarity with AWS and Azure cloud platforms, preferably with cloud certifications (AWS or Azure) and/or data engineering credentials. Exposure to and hands-on expertise with OPC client protocols in cloud-integrated systems is a plus. Strong understanding of Git/version control, and experience maintaining cloud-based infrastructure and data lakes. Excellent communication and collaboration skills, with the ability to explain technical concepts clearly. Familiarity with real-time data processing, containerization (Docker/Kubernetes), and DevOps/CI-CD practices. What We Offer: Career opportunities and professional development in a growing multinational company with a highly qualified team. Flexible compensation. Full working day. Remote work 2 days a week. Zelestra celebrates the diversity of thought and experience that comes from a variety of backgrounds including, among others, gender, age, and ethnicity. Our mission is to contribute to a fairer and more equitable society. JR1778. Let's co-build a carbon-free tomorrow! Visit us at zelestra.energy

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

About BeGig: BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you're not just taking on one role; you're signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise. Your Opportunity: Join our network as a Data Engineer and work directly with visionary startups to design, build, and optimize data pipelines and systems. You'll help transform raw data into actionable insights, ensuring that data flows seamlessly across the organization to support informed decision-making. Enjoy the freedom to structure your engagement on an hourly or project basis, all while working remotely. Role Overview: As a Data Engineer, you will: Design & Develop Data Pipelines: build and maintain scalable, robust data pipelines that power analytics and machine learning initiatives. Optimize Data Infrastructure: ensure data is processed efficiently, securely, and in a timely manner. Collaborate & Innovate: work closely with data scientists, analysts, and other stakeholders to streamline data ingestion, transformation, and storage. What You'll Do: Data Pipeline Development: design, develop, and maintain end-to-end data pipelines using modern data engineering tools and frameworks. Automate data ingestion, transformation, and loading processes across various data sources. Implement data quality and validation measures to ensure accuracy and reliability (see the sketch below). Infrastructure & Optimization: optimize data workflows for performance and scalability in cloud environments (AWS, GCP, or Azure). Leverage tools such as Apache Spark, Kafka, or Airflow for data processing and orchestration. Monitor and troubleshoot pipeline issues, ensuring smooth data operations. Technical Requirements & Skills: Experience: 3+ years in data engineering or a related field. Programming: proficiency in Python and SQL; familiarity with Scala or Java is a plus. Data Platforms: experience with big data technologies like Hadoop, Spark, or similar. Cloud: working knowledge of cloud-based data solutions (e.g., AWS Redshift, BigQuery, or Azure Data Lake). ETL & Data Warehousing: hands-on experience with ETL processes and data warehousing solutions. Tools: familiarity with data orchestration tools such as Apache Airflow or similar. Database Systems: experience with both relational (PostgreSQL, MySQL) and NoSQL databases. What We're Looking For: A detail-oriented data engineer with a passion for building efficient, scalable data systems. A proactive problem-solver who thrives in a fast-paced, dynamic environment. A freelancer with excellent communication skills and the ability to collaborate with cross-functional teams. Why Join Us: Immediate Impact: tackle challenging data problems that drive real business outcomes. Remote & Flexible: work from anywhere with engagements structured to suit your schedule. Future Opportunities: leverage BeGig's platform to secure additional data-focused roles as your expertise grows. Innovative Work: collaborate with startups at the forefront of data innovation and technology. Ready to Transform Data? Apply now to become a key Data Engineer for our client and a valued member of the BeGig network!
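
As promised above, a small sketch of row-level data quality checks in pandas; the DataFrame, columns, and rules are invented to illustrate the idea, not taken from the posting.

```python
import pandas as pd

df = pd.DataFrame({
    "order_id": [1, 2, None, 4],
    "amount": [10.0, -5.0, 30.0, 25.0],
})

# Two illustrative validation rules: non-null keys, non-negative amounts
checks = {
    "order_id_not_null": df["order_id"].notna().all(),
    "amount_non_negative": (df["amount"] >= 0).all(),
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```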

Posted 1 week ago

Apply

0.0 - 4.0 years

13 - 17 Lacs

Gurugram

Work from Office

Company Profile: Bain & Company is one of the top management consulting firms in the world. In 2004, the firm established its presence in the Indian market by opening the Bain Capability Center in Gurugram, now renamed the Bain Capability Network (BCN). The BCN plays a critical role in supporting Bain's case teams and initiatives globally, helping with analytics and research across all industries and capabilities. Team Summary: This position is based in BCN's Gurgaon office and is a vital part of the Data and Technology team. The Benchmarking team, part of the Data & Insights Industries CoE, plays a critical role in advancing Bain's expertise in the consumer products sector. Our team is responsible for building and maintaining comprehensive databases that capture key industry benchmarks and performance metrics. These data assets empower Bain's consultants to deliver unparalleled insights and drive value for clients in the consumer products industry. By staying ahead of industry trends and leveraging advanced data analytics, the Consumer Products Benchmarking team supports Bain's commitment to delivering top-tier strategic guidance to its clients. Over time, we have developed seamless solutions and utilize powerful, dynamic visualizations and charts on various platforms to showcase our results. A key arm of the team is also involved in creating critical intellectual property (IP) that is valuable for new CPG cases globally at Bain. Position Summary: The person in this role will need to: Translate business objectives into data and analytics solutions, and translate results into business insights using appropriate data engineering, analytics, visualization, and Gen AI applications. Leverage Gen AI skills to design and create repeatable analytical solutions that improve data quality. Design, build, and deploy machine learning models using Scikit-Learn for various predictive analytics tasks (a sketch follows below). Implement and fine-tune NLP models with Hugging Face to address complex language processing challenges. Collaborate with engineering team members on design requirements to turn PoC methods into repeatable data pipelines; work with the Practice team to develop repeatable and scalable products. Assist with the creation and documentation of standard operating procedures for repeated data processes, as well as a knowledge base of data methods. Keep abreast of new developments in AI/ML technologies and best practices in data science, particularly in LLMs and generative AI. Will be a technical lead and might lead a small team, managing their day-to-day activities and providing status updates to PIT. Required Experience: A Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Applied Mathematics, Econometrics, Statistics, Physics, Market Research, or a related field is preferred. 3+ years of experience with data science & data engineering, with hands-on experience with AI/GenAI tools. Experience designing and developing RESTful and GraphQL APIs to facilitate data access and integration. Proficiency with data wrangling in either R or Python is required. Familiarity with MLOps practices for model lifecycle management. Experience with Git and a modern software development workflow is a plus. Experience with containerization such as Docker/Kubernetes is a plus. Agile way of working and tools (Jira, Confluence, Miro). Strong interpersonal and communication skills are a must. Experience with the Microsoft Office suite (Word, Excel, PowerPoint, Teams) is preferred. Ability to explain and discuss Gen AI and Data Engineering technicalities to a business audience.
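
As a flavor of the Scikit-Learn modeling work referenced in the position summary, a minimal example on synthetic data; the dataset and model choice are illustrative assumptions only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a predictive-analytics task
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Scale features, then fit a simple baseline classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")
```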

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Pune

Work from Office

Job Title: Data Engineer, Data Solutions Delivery + Data Catalog & Quality Engineer. About Advanced Energy: Advanced Energy Industries, Inc. (NASDAQ: AEIS) enables design breakthroughs and drives growth for leading semiconductor and industrial customers. Our precision power and control technologies, along with our applications know-how, inspire close partnerships and innovation in thin-film and industrial manufacturing. We are proud of our rich heritage and award-winning technologies, and we value the talents and contributions of all Advanced Energy's employees worldwide. Department: Data and Analytics. Team: Data Solutions Delivery Team. Job Summary: We are seeking a highly skilled Data Engineer to join our Data and Analytics team. As a member of the Data Solutions Delivery team, you will be responsible for designing, building, and maintaining scalable data solutions. The ideal candidate should have extensive knowledge of Databricks, Azure Data Factory, and Google Cloud, along with strong data warehousing skills from data ingestion to reporting. Familiarity with the manufacturing and supply chain domains is highly desirable. Additionally, the candidate should be well-versed in data engineering, data product, and data platform concepts, data mesh, medallion architecture, and establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. The candidate should also have proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc. Key Responsibilities: Design, build, and maintain scalable data solutions using Databricks, ADF, and Google Cloud. Develop and implement data warehousing solutions, including ETL processes, data modeling, and reporting. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Ensure data integrity, quality, and security across all data platforms. Provide expertise in data engineering, data product, and data platform concepts. Implement data mesh principles and medallion architecture to build scalable data platforms. Establish and maintain enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Implement data quality practices using tools like Great Expectations, Deequ, etc. (see the sketch below). Work closely with the manufacturing and supply chain teams to understand domain-specific data requirements. Develop and maintain documentation for data solutions, data flows, and data models. Act as an individual contributor, picking up tasks from technical solution documents and delivering high-quality results. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as a Data Engineer or in a similar role. In-depth knowledge of Databricks, Azure Data Factory, and Google Cloud. Strong data warehousing skills, including ETL processes, data modeling, and reporting. Familiarity with the manufacturing and supply chain domains. Proficiency in data engineering, data product, and data platform concepts, data mesh, and medallion architecture. Experience in establishing enterprise data catalogs using tools like Ataccama, Collibra, or Microsoft Purview. Proven experience in implementing data quality practices using tools like Great Expectations, Deequ, etc. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Ability to work independently and as part of a team. Preferred Qualifications: Master's degree in a related field. Experience with cloud-based data platforms and tools. Certification in Databricks, Azure, or Google Cloud. As part of our total rewards philosophy, we believe in offering and maintaining competitive compensation and benefits programs for our employees to attract and retain a talented, highly engaged workforce. Our compensation programs are focused on equitable, fair pay practices, including market-based base pay and an annual pay-for-performance incentive plan, and we offer a strong benefits package in each of the countries in which we operate. Advanced Energy is committed to diversity in its workforce, including Equal Employment Opportunity for Minorities, Females, Protected Veterans, and Individuals with Disabilities. We are committed to protecting and respecting your privacy. We take your privacy seriously and will only use your personal information to administer your application in accordance with RA No. 10173, also known as the Data Privacy Act of 2012.
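
To make the Great Expectations requirement concrete, a hedged sketch using the classic pandas-dataset API (entry points differ across GE versions; newer releases start from `great_expectations.get_context()` instead). The column names are invented.

```python
import great_expectations as ge
import pandas as pd

raw = pd.DataFrame({
    "serial_number": ["A1", "A2", None],
    "power_kw": [1.2, 0.9, 1.1],
})

# Wrap the frame so expectation methods become available (classic GE API)
df = ge.from_pandas(raw)

# Declarative data quality rule: the key column must never be null
result = df.expect_column_values_to_not_be_null("serial_number")
print(result.success)  # False here: one serial_number is missing
```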

Posted 1 week ago

Apply

5.0 - 9.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Job Summary: As a Senior Data Engineer on the Active IQ Data Engineering team at NetApp, you will play a crucial role in the development and maintenance of our data engineering and infrastructure. You will be responsible for designing, building, and optimizing our data architecture and its infrastructure, as well as managing the flow of data throughout the organization. You will be part of a highly skilled technical team, working closely with senior software developers and technical directors. You will be responsible for contributing and aligning to system-level application architecture, including high-level design, coding standards, and the development and testing of code. The software applications you build will be used by our internal sales team, partners, and customers. This position requires an individual who is creative, team-oriented, technology savvy, driven to produce results, and able to work across teams. Job Requirements: You design, develop, and maintain our real-time data processing and data lakehouse infrastructure. You develop and maintain Ansible playbooks for infrastructure configuration and management. You develop and maintain Kubernetes manifests, Helm charts, and other deployment artifacts. You have hands-on experience with Docker and containerization, including how to manage/prune images in private registries. You have hands-on experience with access control in a K8s cluster. You monitor and troubleshoot issues related to Kubernetes clusters and containerized applications. You drive initiatives to containerize standalone apps and run them in Kubernetes. You design, develop, and maintain our Kafka infrastructure, including topics, connectors, and brokers (see the sketch below). You develop and maintain infrastructure as code (IaC) and collaborate with other teams to ensure consistent infrastructure management across the organization. You use observability tools for capacity management of our services and infrastructure resources. You have a strong understanding of concepts related to computer architecture, data structures, and programming practices. You have a deep understanding of object-oriented programming and design. You work on multiple tasks and responsibilities that contribute to team, program, and company goals. Test automation and experience with AWS ECS and EKS are an added advantage. Education: 8 to 12 years of relevant experience. A Bachelor of Science degree in Computer Science, or a master's degree, is required. At NetApp, we embrace a hybrid working environment designed to strengthen connection, collaboration, and culture for all employees. This means that most roles will have some level of in-office and/or in-person expectations, which will be shared during the recruitment process. Equal Opportunity Employer: NetApp is firmly committed to Equal Employment Opportunity (EEO) and to compliance with all laws that prohibit employment discrimination based on age, race, color, gender, sexual orientation, gender identity, national origin, religion, disability or genetic information, pregnancy, and any protected classification. Why NetApp: We are all about helping customers turn challenges into business opportunity. It starts with bringing new thinking to age-old problems, like how to use data most effectively to run better but also to innovate. We tailor our approach to the customer's unique needs with a combination of fresh thinking and proven approaches. We enable a healthy work-life balance. Our volunteer time off program is best in class, offering employees 40 hours of paid time off each year to volunteer with their favourite organizations. We provide comprehensive benefits, including health care, life and accident plans, emotional support resources for you and your family, legal services, and financial savings programs to help you plan for your future. We support professional and personal growth through educational assistance and provide access to various discounts and perks to enhance your overall quality of life. If you want to help us build knowledge and solve big problems, let's talk.
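
For the Kafka work referenced above, a small hedged sketch of a JSON producer using the kafka-python client; the broker address, topic, and payload are placeholders.

```python
import json

from kafka import KafkaProducer

# Connect to a placeholder broker and serialize values as JSON
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

producer.send("telemetry-events", {"device_id": "n1", "cpu_pct": 42.5})
producer.flush()  # block until buffered records are delivered
```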

Posted 1 week ago

Apply

7.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Conduct technical analyses of existing data pipelines, ETL processes, and on-premises/cloud systems; identify technical bottlenecks, evaluate migration complexities, and propose optimizations. Desired Skills and Experience: Candidates should have a B.E./B.Tech/MCA/MBA in Finance, Information Systems, Computer Science, or a related field. 7+ years of experience in data and cloud architecture working with client stakeholders. Strong experience in Synapse Analytics, Databricks, ADF, Azure SQL (DW/DB), and SSIS. Strong experience in advanced PowerShell, batch scripting, and C# (.NET 3.0). Expertise in orchestration systems such as ActiveBatch and Azure orchestration tools. Strong understanding of data warehousing, data lakes (DLs), and Lakehouse concepts. Excellent communication skills, both written and verbal. Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts. Experience with delivering projects within an agile environment. Experience in project management and team management. Key responsibilities include: Understand and review PowerShell (PS), SSIS, batch script, and C# (.NET 3.0) codebases for data processes. Assess the complexity of trigger migration across ActiveBatch (AB), Synapse, ADF, and Azure Databricks (ADB). Define usage of Azure SQL DW, SQL DB, and Data Lake (DL) for various workloads, proposing transitions where beneficial. Analyze data patterns for optimization, including direct raw-to-consumption loading and zone elimination (e.g., stg/app zones). Understand requirements for external tables (Lakehouse). Lead project deliverables, ensuring actionable and strategic outputs. Evaluate and ensure quality of deliverables within project timelines. Develop a strong understanding of the equity market domain. Collaborate with domain experts and business stakeholders to understand business rules/logic. Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders. Independently troubleshoot difficult and complex issues in dev, test, UAT, and production environments. Be responsible for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries. Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, a natural aptitude for developing good internal working relationships, and a flexible work ethic. Be responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).

Posted 1 week ago

Apply

8.0 - 10.0 years

11 - 15 Lacs

Gurugram

Work from Office

Design, construct, and maintain scalable data management systems using Azure Databricks, ensuring they meet end-user expectations. Supervise the upkeep of existing data infrastructure workflows to ensure continuous service delivery. Create data processing pipelines utilizing Databricks Notebooks, Spark SQL, Python, and other Databricks tools (a sketch follows below). Oversee and lead the module through planning, estimation, implementation, monitoring, and tracking. Desired Skills and Experience: 8+ years of experience in data engineering, with expertise in Azure Databricks, MSSQL, LakeFlow, Python, and supporting Azure technology. Design, build, test, and maintain highly scalable data management systems using Azure Databricks. Create data processing pipelines utilizing Databricks Notebooks and Spark SQL. Integrate Azure Databricks with other Azure services like Azure Data Lake Storage and Azure SQL Data Warehouse. Design and implement robust ETL pipelines using ADF and Databricks, ensuring data quality and integrity. Collaborate with data architects to implement effective data models and schemas within the Databricks environment. Develop and optimize PySpark/Python code for data processing tasks. Assist stakeholders with data-related technical issues and support their data infrastructure needs. Develop and maintain documentation for data pipeline architecture, development processes, and data governance. Data warehousing: in-depth knowledge of data warehousing concepts, architecture, and implementation, including experience with various data warehouse platforms. Extremely strong organizational and analytical skills with strong attention to detail. Strong track record of excellent results delivered to internal and external clients. Excellent problem-solving skills, with the ability to work independently or as part of a team. Strong communication and interpersonal skills, with the ability to effectively engage with both technical and non-technical stakeholders. Able to work independently without the need for close supervision, and collaboratively as part of cross-team efforts. Key Responsibilities: Interpret business requirements, either gathered or acquired. Work with internal resources as well as application vendors. Design, develop, and maintain Databricks solutions and relevant data quality rules. Troubleshoot and resolve data-related issues. Configure and create data models and data quality rules to meet the needs of customers. Hands-on handling of multiple database platforms, such as Microsoft SQL Server and Oracle. Review and analyze data from multiple internal and external sources. Analyze existing PySpark/Python code and identify areas for optimization. Write new optimized SQL queries or Python scripts to improve performance and reduce run time. Identify opportunities for efficiencies and innovative approaches to completing the scope of work. Write clean, efficient, and well-documented code that adheres to best practices and Council IT coding standards. Maintain and operate existing custom code processes. Participate in team problem-solving efforts and offer ideas to solve client issues. Query-writing skills with the ability to understand and implement changes to SQL functions and stored procedures. Effectively communicate with business and technology partners, peers, and stakeholders. Deliver results under demanding timelines to real-world business problems. Work independently and multi-task effectively. Configure system settings and options and execute unit/integration testing. Develop end-user release notes and training materials, and deliver training to a broad user base. Identify and communicate areas for improvement. Demonstrate high attention to detail, work in a dynamic environment whilst maintaining high quality standards, a natural aptitude for developing good internal working relationships, and a flexible work ethic. Be responsible for quality checks and adhering to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).
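
As referenced above, a minimal sketch of a Databricks-style pipeline step that mixes Spark SQL and PySpark; the table names are invented, and `spark` is assumed to be the session object a Databricks notebook provides.

```python
from pyspark.sql import functions as F

# Read a hypothetical bronze table and expose it to Spark SQL
orders = spark.table("bronze.orders")
orders.createOrReplaceTempView("orders_v")

# Aggregate with Spark SQL
daily = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount, COUNT(*) AS order_count
    FROM orders_v
    GROUP BY order_date
""")

# Stamp the load time and persist as a silver table
(daily
 .withColumn("loaded_at", F.current_timestamp())
 .write.mode("overwrite")
 .saveAsTable("silver.daily_order_summary"))
```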

Posted 1 week ago

Apply

4.0 - 6.0 years

4 - 9 Lacs

Gurugram

Work from Office

As a key member of the DTS team, you will primarily collaborate closely with a leading global hedge fund on data engagements, with a foundation in building modern, responsive web applications using Blazor and MudBlazor. In this role, you will be instrumental in shaping the user experience of our applications, working closely with cross-functional teams to deliver high-quality, scalable, and maintainable UI components. Desired Skills and Experience: Essential skills: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 4-6 years of experience in data engineering, with a strong background in building and maintaining data pipelines and ETL processes. Proven experience with Blazor and MudBlazor, or a strong willingness to learn. Solid understanding of modern JavaScript frameworks (e.g., React, Angular, Vue). Strong grasp of HTML, CSS, and responsive design principles. Experience working in collaborative, agile development environments. Familiarity with accessibility standards and frontend performance optimization. Experience with Razor components and .NET backend integration. Exposure to unit testing and automated UI testing tools. Key Responsibilities: Build and maintain responsive, reusable UI components in Blazor and MudBlazor. Translate UI/UX mockups and business requirements into functional frontend features. Work closely with backend engineers to ensure smooth API integrations. Participate in code reviews and collaborate on frontend design patterns and best practices. Investigate and resolve UI bugs and performance issues. Contribute to maintaining the consistency, accessibility, and scalability of the frontend codebase. Collaborate with QA and DevOps to support testing and deployment pipelines. Stay current with frontend trends and technologies, particularly in the .NET and Blazor ecosystems. Our current stack includes C#, .NET 5+, Blazor, and the MudBlazor component library. We welcome developers with strong experience in modern JavaScript frameworks (such as React, Angular, or Vue) and a willingness to quickly learn Blazor and Razor components. Key Metrics: C#, .NET 5+, Blazor. UI Library: MudBlazor. Git, CI/CD, Agile/Scrum. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.

Posted 1 week ago

Apply

3.0 - 7.0 years

4 - 8 Lacs

Bengaluru

Work from Office

As a member of the Data and Technology practice, you will be working on advanced AI/ML engagements tailored for the investment banking sector. This includes developing and maintaining data pipelines, ensuring data quality, and enabling data-driven insights. Your core responsibility will be to build and manage scalable data infrastructure that supports our proof-of-concept initiatives (POCs) and full-scale solutions for our clients. You will work closely with data scientists, DevOps engineers, and clients to understand their data requirements, translate them into technical tasks, and develop robust data solutions. Your primary duties will encompass: Develop, optimize, and maintain scalable and reliable data pipelines using tools such as Python, SQL, and Spark. Integrate data from various sources including APIs, databases, and cloud storage solutions such as Azure, Snowflake, and Databricks. Implement data quality checks and ensure the accuracy and consistency of data. Manage and optimize data storage solutions, ensuring high performance and availability. Work closely with data scientists and DevOps engineers to ensure seamless integration of data pipelines and support machine learning model deployment. Monitor and optimize the performance of data workflows to handle large volumes of data efficiently. Create detailed documentation of data processes. Implement security best practices and ensure compliance with industry standards. Experience / Skills: 5+ years of relevant experience, including: experience in a data engineering role, preferably within the financial services industry. Strong experience with data pipeline tools and frameworks such as Python, SQL, and Spark. Proficiency in cloud platforms, particularly Azure, Snowflake, and Databricks. Experience with data integration from various sources including APIs and databases. Strong understanding of data warehousing concepts and practices. Excellent problem-solving skills and attention to detail. Strong communication skills, both written and oral, with business and technical aptitude. Additionally, desired skills: Familiarity with big data technologies and frameworks. Experience with financial datasets and understanding of investment banking metrics. Knowledge of visualization tools (e.g., PowerBI). Education: Bachelor's or Master's in Science or Engineering disciplines such as Computer Science, Engineering, Mathematics, Physics, etc.

Posted 1 week ago

Apply

4.0 - 6.0 years

4 - 8 Lacs

Gurugram

Work from Office

As a key member of the DTS team, you will primarily collaborate closely with a global leading hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements to migrate scripts from MATLAB to Python. Also, work on re-creating data visualizations using Tableau/Power BI. Desired Skills and Experience: Essential skills: 4-6 years of experience with data analytics. Skilled in Python, PySpark, and MATLAB. Working knowledge of Snowflake and SQL (a sketch follows below). Hands-on experience generating dashboards using Tableau/Power BI. Experience working with financial and/or alternative data products. Excellent analytical and strong problem-solving skills. Working knowledge of data science concepts, regression, statistics, and the associated Python libraries. Interest in quantitative equity investing and data analysis. Familiarity with version control systems such as Git. Education: B.E./B.Tech in Computer Science or a related field. Key Responsibilities: Re-write and enhance the existing analytics process and code from MATLAB to Python. Build a GUI to allow users to provide parameters for generating these reports. Store the data in Snowflake tables and write queries using PySpark to extract, manipulate, and upload data as needed. Re-create the existing dashboards in Tableau and Power BI. Collaborate with the firm's research and IT teams to ensure data quality and security. Engage with technical and non-technical clients as an SME on data asset offerings. Key Metrics: Python, SQL, MATLAB, Snowflake, Pandas/PySpark. Tableau, Power BI, data science. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.
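
For the Snowflake extraction step referenced above, a hedged sketch using snowflake-connector-python; every connection parameter and the query are placeholders (real credentials belong in a secrets manager), and `fetch_pandas_all` requires the connector's pandas extra.

```python
import snowflake.connector

# Placeholder connection parameters
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="ANALYTICS_WH",
    database="RESEARCH",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT ticker, close_price, trade_date FROM daily_prices LIMIT 100")
    df = cur.fetch_pandas_all()  # DataFrame for downstream Tableau/Power BI work
    print(df.head())
finally:
    conn.close()
```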

Posted 1 week ago

Apply

2.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

We're seeking a skilled Software Engineer with expertise in C++ and Python; experience working on Large Language Models (LLMs) is a nice-to-have. Desired Skills and Experience: Essential skills: Minimum of a bachelor's degree in a technical or quantitative field with a strong academic background. Demonstrated ability to implement data engineering pipelines and real-time applications in C++ (Python is a plus). Proficiency with C++ tools like the STL and object-oriented programming in C++ is a must. Experience with Linux/Unix shell and scripting languages and Git is a must. Experience with Python-based tools like Jupyter notebooks and coding standards like PEP 8 is a plus. Strong problem-solving skills and understanding of data structures and algorithms. Experience with large-scale data processing and pipeline development. Understanding of various LLM frameworks and experience with prompt engineering using Python or other scripting languages. Nice to have: Knowledge of natural language processing (NLP) concepts, and familiarity with integrating and leveraging LLM APIs for various applications. Key Responsibilities: Design, develop, and maintain projects using C++ and Python, along with operational support. Transform a wide range of structured and unstructured data into standardized outputs for quantitative analysis and financial engineering. Participate in code reviews, ensure coding standards, and contribute to the improvement of the codebase. Develop utility tools that further automate the software development, testing, and deployment workflow. Collaborate with internal and external cross-functional teams. Key Metrics: C++. Behavioral Competencies: Good communication (verbal and written), critical thinking, and attention to detail.

Posted 1 week ago

Apply

2.0 - 4.0 years

2 - 6 Lacs

Gurugram

Work from Office

As a key member of the DTS team, you will primarily collaborate closely with a global leading hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures. Desired Skills and Experience: Essential skills: B.Tech/M.Tech/MCA with 2-4 years of overall experience. Skilled in Python and SQL. Experience with data modeling, data warehousing, and building data pipelines. Experience working with FTP, API, S3, and other distribution channels to source data (a sketch follows below). Experience working with financial and/or alternative data products. Experience working with cloud-native tools for data processing and distribution. Experience with Snowflake and Airflow. Key Responsibilities: Engage with vendors and technical teams to systematically ingest, evaluate, and create valuable data assets. Collaborate with the core engineering team to create central capabilities to process, manage, and distribute data assets at scale. Apply robust data quality rules to systematically qualify data deliveries and guarantee the integrity of financial datasets. Engage with technical and non-technical clients as an SME on data asset offerings. Key Metrics: Python, SQL. Snowflake. Data engineering and pipelines. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.
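
To illustrate sourcing over the API and S3 channels referenced above, a hedged sketch with requests and boto3; the URL, bucket, and key are invented, and credential handling is omitted.

```python
import boto3
import requests

# Pull a hypothetical vendor export over HTTPS
resp = requests.get("https://vendor.example.com/exports/positions.csv", timeout=30)
resp.raise_for_status()

# Land the raw payload in S3 for downstream qualification and processing
s3 = boto3.client("s3")
s3.put_object(
    Bucket="raw-vendor-data",
    Key="positions/2025-06-02/positions.csv",
    Body=resp.content,
)
```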

Posted 1 week ago

Apply

6.0 - 8.0 years

12 - 17 Lacs

Gurugram

Work from Office

As a key member of the DTS team, you will primarily collaborate closely with a global leading hedge fund on data engagements. Partner with the data strategy and sourcing team on data requirements to design data pipelines and delivery structures. Desired Skills and Experience: Essential skills: A bachelor's degree in computer science, engineering, mathematics, or statistics. 6-8 years of experience in a Data Engineering role, with a proven track record of delivering insightful, value-add dashboards. Experience writing advanced SQL queries and Python, and a deep understanding of relational databases. Experience working within an Azure environment. Experience with Tableau; Holland Mountain ATLAS is a plus. Experience with master data management and data governance is a plus. Ability to prioritize multiple projects simultaneously, problem-solve, and think outside the box. Key Responsibilities: Develop, test, and release data packages for Tableau dashboards to support all business functions, including investments, investor relations, marketing, and operations. Support ad hoc requests, including the ability to write queries and extract data from a data warehouse (a sketch follows below). Assist with the management and maintenance of an Azure environment. Maintain a data dictionary, which includes documentation of database structures, ETL processes, and reporting dependencies. Key Metrics: Python, SQL. Data engineering, Azure, and ATLAS. Behavioral Competencies: Good communication (verbal and written). Experience in managing client stakeholders.
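
For the ad hoc warehouse extract referenced above, a hedged sketch reading from an Azure SQL database into pandas via pyodbc; the connection string, schema, and query are placeholders.

```python
import pandas as pd
import pyodbc

# Placeholder Azure SQL connection string; real credentials belong in a vault
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=funddata;"
    "UID=report_user;PWD=secret"
)

query = """
    SELECT fund_id, as_of_date, nav
    FROM reporting.fund_nav
    WHERE as_of_date >= ?
"""

# Parameterized read straight into a DataFrame for a dashboard data package
df = pd.read_sql(query, conn, params=["2025-01-01"])
conn.close()
print(df.head())
```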

Posted 1 week ago

Apply

9.0 - 14.0 years

35 - 55 Lacs

Noida

Hybrid


Looking for a better opportunity? Join us and make things happen with DMI, an Encora company!

Encora is seeking a full-time Lead Data Engineer with logistics domain expertise to support our large-scale manufacturing client in digital transformation. The Lead Data Engineer is responsible for the day-to-day leadership and guidance of the local, India-based data team. This role will be the primary interface with the client's management team and will work cross-functionally with various IT functions to streamline project delivery.

Minimum Requirements:
- 8+ years of overall experience in IT
- Current 5+ years of experience as a Data Engineer on Azure Cloud
- Current 3+ years of hands-on experience with Databricks/Azure Databricks
- Proficient in Python/PySpark
- Proficient in SQL/T-SQL
- Proficient in data warehousing concepts (ETL/ELT, Data Vault modelling, dimensional modelling, SCD, CDC)

Primary Skills: Azure Cloud, Databricks, Azure Data Factory, Azure Synapse Analytics, SQL/T-SQL, PySpark, Python, plus logistics domain expertise

Work Location: Noida, India (candidates open to immediate relocation can also apply)

Interested candidates can apply at nidhi.dubey@encora.com with their updated resume, specifying:
1. Total experience
2. Relevant experience in Azure Cloud
3. Relevant experience in Azure Databricks
4. Relevant experience in Azure Synapse
5. Relevant experience in SQL/T-SQL
6. Relevant experience in PySpark
7. Relevant experience in Python
8. Relevant experience in the logistics domain
9. Relevant experience in data warehousing
10. Current CTC
11. Expected CTC
12. Official notice period (if serving, please specify LWD)
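Illustrative only: a minimal PySpark sketch of the ELT style this posting describes — reading raw shipment records and writing a cleaned Delta table. Paths and column names are hypothetical, and a Databricks-provided `spark` session is assumed:

```python
# Hypothetical sketch: clean raw logistics records and persist them as a Delta table.
# Assumes a Databricks notebook where `spark` is already defined.
from pyspark.sql import functions as F

raw = spark.read.option("header", True).csv("/mnt/raw/shipments/")

cleaned = (
    raw.withColumn("ship_date", F.to_date("ship_date", "yyyy-MM-dd"))  # parse dates
       .withColumn("weight_kg", F.col("weight_kg").cast("double"))     # enforce types
       .dropDuplicates(["shipment_id"])                                # basic CDC hygiene
       .filter(F.col("shipment_id").isNotNull())
)

cleaned.write.format("delta").mode("overwrite").save("/mnt/curated/shipments/")
```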

Posted 1 week ago

Apply

3.0 - 6.0 years

35 - 40 Lacs

Pune, Bengaluru

Work from Office


Client: Our client is a leading Software as a Service (SaaS) company that specializes in transforming data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions.

Requirements: Our client is looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.
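A minimal sketch of the "data validation and monitoring" responsibility above, using pandas; the column names, checks, and thresholds are hypothetical examples, not the client's actual rules:

```python
# Hypothetical sketch: simple batch data-quality checks before loading a dataset.
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["patient_id"].isna().any():
        failures.append("null patient_id values found")
    if df.duplicated(subset=["patient_id", "visit_date"]).any():
        failures.append("duplicate (patient_id, visit_date) rows")
    if (df["charge_amount"] < 0).any():
        failures.append("negative charge_amount values")
    return failures

df = pd.DataFrame({
    "patient_id": [1, 2, 2],
    "visit_date": ["2024-01-01", "2024-01-02", "2024-01-02"],
    "charge_amount": [120.0, -5.0, -5.0],
})
print(validate(df))  # the duplicate and negative-amount checks fire on this sample
```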

Posted 1 week ago

Apply

3.0 - 5.0 years

20 - 22 Lacs

Udaipur

Work from Office


- 3-5 years of experience in Data Engineering or similar roles
- Strong foundation in cloud-native data infrastructure and scalable architecture design
- Build and maintain reliable, scalable ETL/ELT pipelines using modern cloud-based tools
- Design and optimize Data Lakes and Data Warehouses for real-time and batch processing

Posted 1 week ago

Apply

2.0 - 6.0 years

2 - 7 Lacs

Bengaluru

Work from Office


Role: Senior IT Recruiter
Experience: 2-6 years

Required skills and qualifications
- 2+ years of experience in recruitment
- Exceptional communication, interpersonal, and decision-making skills
- Exposure to niche technologies or domains such as SAP, Salesforce, Embedded, VLSI; DevOps, Cloud (AWS/GCP/Azure), AI/ML; Cybersecurity, Blockchain, Data Engineering
- Expertise in sourcing, screening, shortlisting, interview coordination, negotiating, follow-up, and internet search methods
- Familiarity with job boards and computer systems designed specifically for HR
- Proven success in conducting interviews using various methods (phone, video, email, in-person)
- Ability to close critical, hard-to-fill roles

Thanks & Regards
Anjitha
IT Recruiter
Mobile: 8157970261
Website: www.acesoftlabs.com
Email: Anjitha.jr@acesoftlabs.com

Posted 1 week ago

Apply

3.0 - 8.0 years

20 - 35 Lacs

Bengaluru

Hybrid


Role & responsibilities
- Design, develop, and optimize complex SQL queries, stored procedures, and data models for Oracle-based systems
- Create and maintain efficient data pipelines for extract, transform, and load (ETL) processes using Informatica or Python
- Implement data quality controls and validation processes to ensure data integrity
- Collaborate with cross-functional teams to understand business requirements and translate them into technical specifications
- Document database designs, procedures, and configurations to support knowledge sharing and system maintenance
- Troubleshoot and resolve database performance issues through query optimization and indexing strategies
- Integrate Oracle systems with cloud services, particularly AWS S3 and related technologies

Preferred candidate profile
- 3+ years of experience with Oracle databases, including advanced SQL and PL/SQL development
- Strong knowledge of data modelling principles and database design
- Proficiency with Python for data processing and automation
- Experience implementing and maintaining data quality controls
- Experience with AI-assisted development (GitHub Copilot, etc.)
- Ability to reverse-engineer existing database schemas and understand complex data relationships
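A minimal sketch of the Oracle-to-S3 integration described above, using the python-oracledb and boto3 libraries; the credentials, DSN, query, bucket, and key are all hypothetical placeholders:

```python
# Hypothetical sketch: extract recent Oracle rows and stage them to S3 as CSV.
import csv
import io

import boto3
import oracledb

conn = oracledb.connect(user="etl_user", password="***", dsn="dbhost/ORCLPDB1")
with conn.cursor() as cur:
    cur.execute(
        "SELECT order_id, status, updated_at FROM orders "
        "WHERE updated_at >= TO_DATE(:1, 'YYYY-MM-DD')",
        ["2024-01-01"],
    )
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([d[0] for d in cur.description])  # header row from cursor metadata
    writer.writerows(cur)                             # stream result rows into the buffer
conn.close()

boto3.client("s3").put_object(
    Bucket="example-staging-bucket",  # placeholder bucket
    Key="oracle/orders.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```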

Posted 1 week ago

Apply