8.0 - 11.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Data Engineering Associate Advisor - HIH - Evernorth

Position Summary: The Data Engineering Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will be a key contributor on the team, designing, building, testing, and delivering large-scale software applications, systems, platforms, services, and technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery. The candidate will play a key role in automating processes on Databricks and AWS, and will collaborate with business and technology partners to gather requirements and to develop and implement solutions. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The applicant will work on a team that demands an innovation-driven, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, which requires very strong technical and communication skills.

Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to tradeoff and negotiation discussions.

Domain Expertise: A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, willingness, cooperation, and concern for business issues, and in-depth knowledge of the immediate systems worked on.

Problem Solving: Proven problem-solving skills, including debugging skills that allow you to determine the source of issues in unfamiliar code or systems, the ability to recognize and solve repetitive problems rather than working around them, to treat mistakes as learning opportunities, and to break down large problems into smaller, more manageable ones.

Responsibilities: The candidate will be responsible for delivering business needs end to end, from requirements through development into production. Through a hands-on engineering approach in the Databricks environment, this individual will deliver data engineering toolchains, platform capabilities, and reusable patterns. The applicant will be responsible for following software engineering best practices with an automation-first approach and a continuous learning and improvement mindset, will ensure adherence to enterprise architecture direction and architectural standards, and should be able to collaborate in a high-performing team environment, with an ability to influence and be influenced by others.
Experience Required: 8 to 11 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation. More than 3 years of experience in Databricks within an AWS environment. Data engineering experience.

Experience Desired: Expertise in Agile software development principles and patterns. Expertise in building streaming, batch, and event-driven architectures and data pipelines.

Primary Skills: Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue. Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation. Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry. Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake. Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation. Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS. Experience in API and microservices stacks such as Spring Boot and Quarkus. Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront. Experience with one or more of the following programming and scripting languages: Python, Scala, JVM-based languages, or JavaScript, and the ability to pick up new languages. Experience in building CI/CD pipelines using Jenkins and GitHub Actions. Strong expertise with source code management and its best practices. Proficiency in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD). Knowledge of the Behavior-Driven Development (BDD) approach.

Additional Skills: Cloud-based security principles and protocols like OAuth2, JWT, data encryption, hashing data, secret management, etc. Ability to perform detailed analysis of business problems and technical environments. Strong oral and written communication skills. Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives. Continuous focus on ongoing learning and development.

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 3 weeks ago
8.0 - 13.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Location: Bengaluru. We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have extensive experience in data engineering, with a strong focus on Databricks, Python, and SQL. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support various business needs.

Key Responsibilities: Develop and implement efficient data pipelines and ETL processes to migrate and manage client, investment, and accounting data in Databricks. Work closely with the investment management team to understand data structures and business requirements, ensuring data accuracy and quality. Monitor and troubleshoot data pipelines, ensuring high availability and reliability of data systems. Optimize database performance by designing scalable and cost-effective solutions.

What's on offer: Competitive salary and benefits package. Opportunities for professional growth and development. A collaborative and inclusive work environment. The chance to work on impactful projects with a talented team.

Candidate Profile: 8+ years of experience in data engineering or a similar role. Proficiency in Apache Spark and Databricks, including schema design, data partitioning, and query optimization. Exposure to Azure. Exposure to streaming technologies (e.g., Auto Loader, DLT streaming). Advanced SQL and data modeling skills and data warehousing concepts tailored to investment management data (e.g., transaction, accounting, portfolio, and reference data). Experience with ETL/ELT tools like SnapLogic and programming languages (e.g., Python, Scala, R). Familiarity with workload automation and job scheduling tools such as Control-M. Familiarity with data governance frameworks and security protocols. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills.

Education: Bachelor's degree in computer science, IT, or a related discipline.
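For candidates unfamiliar with the streaming tools this posting names, here is a minimal, illustrative sketch of a Databricks Auto Loader ingestion stream. It assumes a Databricks runtime (which provides the `spark` session and the `cloudFiles` source); all paths and table names are hypothetical placeholders, not part of the posting.

```python
# Illustrative sketch only: assumes a Databricks runtime, where `spark`
# is predefined and the `cloudFiles` (Auto Loader) source is available.
# Paths and table names below are hypothetical placeholders.
raw = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")                         # incoming file format
    .option("cloudFiles.schemaLocation", "/tmp/schemas/trades")  # schema inference state
    .load("/mnt/raw/trades")                                     # cloud landing folder
)

(
    raw.writeStream
    .option("checkpointLocation", "/tmp/checkpoints/trades")  # exactly-once progress tracking
    .trigger(availableNow=True)   # process the backlog, then stop (batch-style increments)
    .toTable("bronze.trades")     # Delta table target
)
```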
Posted 3 weeks ago
6.0 - 11.0 years
0 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Technical: Two or more years of API development experience (specifically REST APIs using Java, Spring Boot, Hibernate). Two or more years of data engineering experience with the respective tools and technologies (e.g., Apache Spark, Databricks, SQL databases, NoSQL databases, data lake concepts). Working knowledge of test-driven development. Working knowledge of DevOps and lean development principles, such as Continuous Integration and Continuous Delivery/Deployment, using tools like Git. Working knowledge of ETL, data modeling, data warehousing, and working with large-scale datasets. Working knowledge of AWS services such as Lambda, RDS, ECS, DynamoDB, API Gateway, S3, etc.

Good to have: AWS Developer certification or working experience in AWS or other cloud technologies. Passionate, creative, and eager to learn new, complex technical areas. Accountable, curious, and collaborative, with an intense focus on product quality. Skilled in interpersonal communication, with the ability to communicate complex topics to non-technical audiences. Experience working in an agile team environment.
Posted 3 weeks ago
4.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning. At Target, we are gearing up for exponential growth and continuously expanding our guest experience. To support this expansion, Data Engineering is building robust warehouses and enhancing existing datasets to meet business needs across the enterprise. We are looking for talented individuals who are passionate about innovative technology and data warehousing and are eager to contribute to data engineering.

Position Overview: Assess client needs and convert business requirements into a business intelligence (BI) solutions roadmap relating to complex issues involving long-term or multi-work streams. Analyze technical issues and questions, identifying data needs and delivery mechanisms. Implement data structures using best practices in data modeling, ETL/ELT processes, Spark, Scala, SQL, database, and OLAP technologies. Manage the overall development cycle, driving best practices and ensuring development of high-quality code for common assets and framework components. Develop test-driven solutions and provide technical guidance to a team of high-caliber data engineers, building BI applications that can be deployed quickly and in an automated fashion. Manage and execute against agile plans and set deadlines based on client, business, and technical requirements. Drive resolution of technology roadblocks, including code, infrastructure, build, deployment, and operations. Ensure all code adheres to development and security standards.

About you: 4-year degree or equivalent experience. 5+ years of software development experience, preferably in data engineering/Hadoop development (Hive, Spark, etc.).
Hands-on experience in object-oriented or functional programming, such as Scala, Java, or Python. Knowledge of or experience with a variety of database technologies (Postgres, Cassandra, SQL Server). Knowledge of designing data integration using API and streaming technologies (Kafka) as well as ETL and other data integration patterns. Experience with cloud platforms like Google Cloud, AWS, or Azure; hands-on experience with BigQuery is an added advantage. Good understanding of distributed storage (HDFS, Google Cloud Storage, Amazon S3) and processing (Spark, Google Dataproc, Amazon EMR, or Databricks). Experience with a CI/CD toolchain (Drone, Jenkins, Vela, Kubernetes) a plus. Familiarity with data warehousing concepts and technologies. Maintains technical knowledge within areas of expertise. A constant learner and team player who enjoys solving tech challenges with a global team. Hands-on experience in building complex data pipelines and flow optimizations. Able to understand the data, draw insights, make recommendations, and identify any data quality issues upfront. Experience with test-driven development and software test automation. Follows best coding practices and engineering guidelines as prescribed. Strong written and verbal communication skills, with the ability to present complex technical information clearly and concisely to a variety of audiences.

Life at Target: https://india.target.com/ Benefits: https://india.target.com/life-at-target/workplace/benefits Culture: https://india.target.com/life-at-target/belonging
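As illustration of the Kafka-plus-Spark streaming integration this posting asks about, here is a minimal sketch of reading a topic with Spark Structured Streaming. The broker address and topic are hypothetical, and the job needs the spark-sql-kafka connector package on the classpath; this is a sketch, not a production pipeline.

```python
# Illustrative sketch only: consuming a Kafka topic with Spark Structured
# Streaming. Broker and topic names are hypothetical placeholders, and the
# spark-sql-kafka connector package must be available on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest-demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

query = (
    events.writeStream
    .format("console")                               # demo sink; real jobs write to a table
    .option("checkpointLocation", "/tmp/chk/orders")  # streaming progress state
    .start()
)
query.awaitTermination()
```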
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
India, Bengaluru
Work from Office
Senior Data Engineer India, Bengaluru

Get to know Okta: Okta is The World's Identity Company. We free everyone to safely use any technology anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box; we're looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We're building a world where Identity belongs to you.

Senior Data Engineer - Enterprise Data Platform

Get to know Data Engineering: Okta's Business Operations team is on a mission to accelerate Okta's scale and growth. We bring world-class business acumen and technology expertise to every interaction. We also drive cross-functional collaboration and are focused on delivering measurable business outcomes. Business Operations strives to deliver amazing technology experiences for our employees and to ensure that our offices have all the technology that is needed for the future of work. The Data Engineering team is focused on building platforms and capabilities that are utilized across the organization by sales, marketing, engineering, finance, product, and operations. The ideal candidate will have a strong engineering background with the ability to tie engineering initiatives to business impact. You will be part of a team doing detailed technical designs, development, and implementation of applications using cutting-edge technology stacks.

The Senior Data Engineer Opportunity: A Senior Data Engineer is responsible for designing, building, and maintaining scalable solutions. This role involves collaborating with data engineers, analysts, scientists, and other engineers to ensure data availability, integrity, and security. The ideal candidate will have a strong background in cloud platforms, data warehousing, infrastructure as code, and continuous integration/continuous deployment (CI/CD) practices.

What you'll be doing: Design, develop, and maintain scalable data platforms using AWS, Snowflake, dbt, and Databricks. Use Terraform to manage infrastructure as code, ensuring consistent and reproducible environments. Develop and maintain CI/CD pipelines for data platform applications using GitHub and GitLab. Troubleshoot and resolve issues related to data infrastructure and workflows. Containerize applications and services using Docker to ensure portability and scalability. Conduct vulnerability scans and apply necessary patches to ensure the security and integrity of the data platform. Work with data engineers to design and implement Secure Development Lifecycle practices and security tooling (DAST, SAST, SCA, secret scanning) in automated CI/CD pipelines. Ensure data security and compliance with industry standards and regulations. Stay updated with the latest trends and technologies in data engineering and cloud platforms.
What we are looking for: BS in Computer Science, Engineering, or another quantitative field of study. 5+ years in a data engineering role. 5+ years of experience working with SQL and ETL tools such as Airflow and dbt, with relational and columnar MPP databases like Snowflake or Redshift, and hands-on experience with AWS (e.g., S3, Lambda, EMR, EC2, EKS). 2+ years of experience managing CI/CD infrastructures, with strong proficiency in tools like GitHub Actions, Jenkins, ArgoCD, GitLab, or any CI/CD tool, to streamline deployment pipelines and ensure efficient software delivery. 2+ years of experience with Java, Python, Go, or similar backend languages. Experience with Terraform for infrastructure as code. Experience with Docker and containerization technologies. Experience working with lakehouse architectures such as Databricks and file formats like Iceberg and Delta. Experience in designing, building, and managing complex deployment pipelines. This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.

What you can look forward to as a full-time Okta employee: amazing benefits, making social impact, and developing talent and fostering connection and community at Okta. Okta cultivates a dynamic work environment, providing the best tools, technology, and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work, so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/ Some roles may require travel to one of our office locations for in-person onboarding.

Okta is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/ .
Okta: The foundation for secure connections between people and technology. Okta is the leading independent provider of identity for the enterprise. The Okta Identity Cloud enables organizations to securely connect the right people to the right technologies at the right time. With over 7,000 pre-built integrations to applications and infrastructure providers, Okta customers can easily and securely use the best technologies for their business. More than 19,300 organizations, including JetBlue, Nordstrom, Slack, T-Mobile, Takeda, Teach for America, and Twilio, trust Okta to help protect the identities of their workforces and customers.
Posted 3 weeks ago
10.0 - 12.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Job Information: Job Opening ID: ZR_2063_JOB. Date Opened: 17/11/2023. Industry: Technology. Work Experience: 10-12 years. Job Title: Azure Data Architect. City: Hyderabad. Province: Telangana. Country: India. Postal Code: 500003. Number of Positions: 4. Location: Coimbatore & Hyderabad.

Key skills (mandatory): Azure, SQL, ADF, Databricks, design, architecture. 10+ years of total experience in the data management area, with Azure cloud data platform experience. Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB, and Azure Synapse), with mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB. Must have worked on a large Azure data platform and dealt with high-volume Azure streaming analytics. Experience in designing cloud data platform architecture and designing large-scale environments. 5+ years of experience architecting and building cloud data lakes, specifically with Azure data analytics technologies and architecture, enterprise analytics solutions, and optimizing real-time 'big data' data pipelines, architectures, and data sets.
Posted 3 weeks ago
5.0 - 8.0 years
3 - 6 Lacs
Chennai
Work from Office
Job Information: Job Opening ID: ZR_2159_JOB. Date Opened: 14/03/2024. Industry: Technology. Work Experience: 5-8 years. Job Title: DevOps Engineer. City: Chennai. Province: Tamil Nadu. Country: India. Postal Code: 600004. Number of Positions: 5.

Mandatory Skills: Azure DevOps, CI/CD pipelines, Kubernetes, Docker, cloud tech stack, ADF, Spark, Databricks, Jenkins, building Java-based applications, Java Web, Git, J2EE.

Responsibilities: Design and develop automated deployment arrangements by leveraging configuration management technology. Implement various development, testing, and automation tools and IT infrastructure. Select and deploy appropriate CI/CD tools.
Posted 3 weeks ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_2162_JOB. Date Opened: 15/03/2024. Industry: Technology. Work Experience: 7-9 years. Job Title: Sr Data Engineer. City: Bangalore. Province: Karnataka. Country: India. Postal Code: 560004. Number of Positions: 5.

Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark.

Requirements: Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, and Kafka. Experience in cloud computing, e.g., AWS, GCP, Azure, etc. Able to quickly pick up new programming languages, technologies, and frameworks. Strong skills in building positive relationships across Product and Engineering. Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders. Experience creating and configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc. Working knowledge of data warehousing, data modeling, governance, and data architecture. Experience working with data platforms, including EMR, Airflow, and Databricks (Data Engineering and Delta Lake components). Experience working in Agile and Scrum development processes. Experience in EMR/EC2, Databricks, etc. Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake. Experience architecting data products in streaming, serverless, and microservices architectures and platforms.
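For context on the orchestration skills this posting lists, here is a minimal, illustrative Airflow DAG of the kind such a role would maintain. All task names and logic are hypothetical placeholders under Airflow 2.x conventions, not a real pipeline.

```python
# Illustrative sketch only: a minimal Airflow 2.x DAG. Task names and
# logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull data from source")    # placeholder for real extraction logic

def load() -> None:
    print("write data to warehouse")  # placeholder for real load logic

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,               # do not backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task    # extract runs before load
```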
Posted 3 weeks ago
6.0 - 10.0 years
3 - 7 Lacs
Chennai
Work from Office
Job Information: Job Opening ID: ZR_2199_JOB. Date Opened: 15/04/2024. Industry: Technology. Work Experience: 6-10 years. Job Title: Sr Data Engineer. City: Chennai. Province: Tamil Nadu. Country: India. Postal Code: 600004. Number of Positions: 4.

Strong experience in Python. Good experience in Databricks. Experience working in the AWS/Azure cloud platforms. Experience working with REST APIs and services, and with messaging and event technologies. Experience with ETL or data pipeline tools. Experience with streaming platforms such as Kafka. Demonstrated experience working with large and complex data sets. Ability to document data pipeline architecture and design. Experience in Airflow is nice to have. Ability to build complex Delta Lake solutions.
Posted 3 weeks ago
5.0 - 8.0 years
5 - 9 Lacs
Mumbai
Work from Office
Job Information: Job Opening ID: ZR_1624_JOB. Date Opened: 08/12/2022. Industry: Technology. Work Experience: 5-8 years. Job Title: Azure ADF & Power BI Developer. City: Mumbai. Province: Maharashtra. Country: India. Postal Code: 400001. Number of Positions: 4.

Roles & Responsibilities: The resource must have 5+ years of hands-on experience in Azure cloud development (ADF + Databricks); this is mandatory. Strong in Azure SQL, and good to have knowledge of Synapse/Analytics. Experience working on Agile projects and familiarity with Scrum/SAFe ceremonies. Good communication skills, written and verbal; can work directly with the customer. Ready to work in the 2nd shift. Good in communication and flexible. Defines, designs, develops, and tests software components/applications using Microsoft Azure: Databricks, ADF, ADL, Hive, Python, Spark SQL, PySpark. Expertise in Azure Databricks, ADF, ADL, Hive, Python, Spark, PySpark. Strong T-SQL skills with experience in Azure SQL DW. Experience handling structured and unstructured datasets. Experience in data modeling and advanced SQL techniques. Experience implementing Azure Data Factory pipelines using the latest technologies and techniques. Good exposure to application development. The candidate should work independently with minimal supervision.
Posted 3 weeks ago
5.0 - 8.0 years
2 - 6 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID: ZR_1628_JOB. Date Opened: 09/12/2022. Industry: Technology. Work Experience: 5-8 years. Job Title: Data Engineer. City: Bangalore. Province: Karnataka. Country: India. Postal Code: 560001. Number of Positions: 4.

Roles and Responsibilities: 4+ years of experience as a data developer using Python. Knowledge of Spark and PySpark is preferable but not mandatory. Azure cloud experience is preferred, but alternate cloud experience is fine; preferred experience is in the Azure platform, including Azure Data Lake, Databricks, and Data Factory. Working knowledge of different file formats such as JSON, Parquet, CSV, etc. Familiarity with data encryption and data masking. Database experience in SQL Server is preferable; preferred experience in NoSQL databases like MongoDB. Team player, reliable, self-motivated, and self-disciplined.
Posted 3 weeks ago
6.0 - 11.0 years
15 - 25 Lacs
Chennai, Coimbatore, Bengaluru
Work from Office
Hi Professionals, we are looking for a Data Engineer for a permanent role. Work Location: Hybrid; Chennai, Coimbatore, or Bangalore. Experience: 6 to 11 years. Notice Period: 0 to 15 days or immediate joiner. Skills: 1. Python 2. PySpark 3. SQL 4. Azure Databricks 5. AWS. Interested candidates can send their resume to gowtham.veerasamy@wavicledata.com.
Posted 3 weeks ago
5.0 - 10.0 years
12 - 15 Lacs
Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)
Work from Office
Candidate with 5-8 years of overall experience in software engineering, including 2-4 years of focused expertise in Data Governance practices such as data cataloguing, data lineage, data quality, data purging, data reliability, and data accessibility. Required Candidate profile: Strong experience with data governance tools (Ab Initio, Informatica, Atlan, Collibra, Snowflake, Databricks). Hands-on knowledge of data cataloguing, data lineage, data quality, and data access control. Perks and benefits: to be disclosed post interview.
Posted 3 weeks ago
4.0 - 6.0 years
15 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
KPI Partners is seeking a highly skilled and experienced GenAI Engineer with a strong background in data engineering and software development to join our team. The ideal candidate will focus on enhancing our information retrieval and generation capabilities, with specific experience in Azure AI Search, data processing for RAG, multimodal data integration, and familiarity with Databricks.

Key Responsibilities: Design, develop, and optimize Retrieval-Augmented Generation (RAG) models to improve information retrieval and generation processes within our applications. Develop and maintain search solutions using Azure AI Search to ensure efficient and accurate information access. Process and prepare data to support RAG workflows, ensuring data quality and relevance. Integrate and manage various data types (e.g., text, images) to enhance retrieval and generation capabilities. Work closely with cross-functional teams to integrate data into our existing retrieval ecosystem, ensuring seamless functionality and performance. Ensure the scalability, reliability, and performance of data retrieval in production environments. Stay updated with the latest advancements in AI, ML, and data engineering to drive innovation and maintain a competitive edge.

What we're looking for: A master's degree in Data Science or a related field is preferred. Approximately 8 years of experience in data science, MLOps, and data engineering. Proven experience in AI and ML solution implementation, particularly in semiconductor manufacturing. Proficiency in Python. Proven experience in data engineering and software development, with a focus on building and deploying RAG pipelines or similar information retrieval systems. Familiarity with processing multimodal data (e.g., text, images) for retrieval and generation tasks. Strong understanding of database systems (SQL and NoSQL) and data warehousing solutions. Proficiency in Azure AI, Databricks, and other relevant tools.
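To illustrate the retrieval half of the RAG workflow this posting describes, here is a minimal sketch against Azure AI Search using the official Python client. The endpoint, key, index name, and the 'content' field are hypothetical placeholders, and the generation step is deliberately omitted; this is not KPI Partners' actual pipeline.

```python
# Illustrative sketch only: retrieval for a RAG workflow via Azure AI
# Search. Endpoint, key, index, and the 'content' field are hypothetical.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<service>.search.windows.net",  # placeholder endpoint
    index_name="docs",                                # placeholder index
    credential=AzureKeyCredential("<api-key>"),       # placeholder key
)

def retrieve(question: str, k: int = 3) -> list[str]:
    """Return the top-k matching passages for a user question."""
    results = client.search(search_text=question, top=k)
    return [doc["content"] for doc in results]  # assumes a 'content' field

context = "\n".join(retrieve("What does the warranty cover?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
# `prompt` would then be passed to a generator model (e.g., an Azure
# OpenAI chat completion) to produce the grounded answer.
```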
Posted 3 weeks ago
4.0 - 9.0 years
6 - 11 Lacs
Chennai, Bengaluru
Work from Office
Immediate hiring for Azure Data Engineers/Leads - Hexaware Technologies. Primary skill set: Azure Databricks, PySpark. Required total experience: 4 to 12 years. Location: Chennai & Bangalore only. Work Mode: 5 days work from office. Shift timing: 1 pm to 10 pm. Notice: immediate and early joiners only preferred.

Job Description: Primary: Azure Databricks, ADF, PySpark/Python. Must have: 6+ years of IT experience in data warehousing and ETL. Hands-on data experience with cloud technologies on Azure: ADF, Synapse, PySpark/Python. Ability to understand design and source-to-target mapping (STTM) and create specification documents. Flexibility to operate from client office locations. Able to mentor and guide junior resources, as needed. Nice to have: any relevant certifications; banking experience in Risk & Regulatory, Commercial, or Credit Cards/Retail.

Interested candidates, kindly share your updated resume to ramyar2@hexaware.com with the details below. Full Name: Contact No: Total Exp: Rel Exp in PLSQL: Current & Joining Location: Notice Period (if serving, mention LWD): Current CTC: Expected CTC:
Posted 3 weeks ago
10.0 - 15.0 years
30 - 40 Lacs
Noida, Gurugram
Work from Office
We're hiring for a Snowflake Data Architect with a leading IT services firm for Noida & Gurgaon.

Job Summary: We are seeking a Snowflake Data Architect to design, implement, and optimize scalable data solutions using Databricks and the Azure ecosystem. The ideal candidate will have deep expertise in big data architecture, data engineering, and cloud technologies, enabling them to create robust, high-performance data pipelines and analytics solutions.

Key Responsibilities: Design and develop scalable, secure, and high-performance data architectures using Snowflake, Databricks, Delta Lake, and Apache Spark. Architect ETL/ELT data pipelines to process structured and unstructured data efficiently. Implement data governance, security, and compliance frameworks across cloud-based data platforms. Optimize Spark jobs for performance, cost, and reliability. Collaborate with data engineers, analysts, and business teams to understand requirements and design appropriate solutions. Develop data lakehouse architectures leveraging Databricks and ADLS. Implement machine learning and AI workflows using Databricks ML and integration with ML frameworks. Define and enforce best practices for data modeling, metadata management, and data quality. Monitor and troubleshoot Databricks clusters, job failures, and performance bottlenecks. Stay updated with the latest Databricks features, Apache Spark advancements, and cloud innovations.

Required Qualifications: 10+ years of experience in data architecture, data engineering, or big data platforms. Hands-on experience with Snowflake is mandatory, and experience with Databricks (including Delta Lake, Unity Catalog, and DBSQL) is great to have as an addition. This is an individual contributor role requiring expertise in Apache Spark for large-scale data processing. Proficiency in Python, Scala, or SQL for data transformations. Experience with Azure and its data services (e.g., Azure Data Factory, Azure Synapse, Azure SQL Server). Knowledge of data lakehouse architectures, data warehousing, and ETL processes. Strong understanding of data security, IAM, and compliance best practices. Experience with CI/CD pipelines and Infrastructure as Code (Terraform, ARM templates, CloudFormation). Familiarity with MLflow, Feature Store, and MLOps concepts is a plus. Strong interpersonal and communication skills.

If interested, please share your profile at harjeet@beanhr.com
Posted 3 weeks ago
4.0 - 9.0 years
9 - 19 Lacs
Pune
Work from Office
We are seeking a Data Engineer with strong expertise in Microsoft Fabric and Databricks to support our enterprise data platform initiatives.

Role: Data Engineer (Microsoft Fabric & Databricks). Location: Pune/Remote.

Key Responsibilities: Develop and maintain scalable data platforms using Microsoft Fabric for BI and Databricks for real-time analytics. Build robust data pipelines for SAP, MS Dynamics, and other cloud/on-prem sources. Design enterprise-scale data lakes and integrate structured/unstructured data. Optimize algorithms developed by data scientists and ensure platform reliability. Collaborate with data scientists, architects, and business teams in a global environment. Perform general administration, security, and monitoring of data platforms.

Mandatory Skills: Experience with Microsoft Fabric (Warehouse, Lakehouse, Data Factory, Dataflow Gen2, semantic models) and/or Databricks (Apache Spark). Strong background in Python, SQL (Scala is a plus), and API integration. Hands-on experience with Power BI and various database technologies (RDBMS, OLAP, time series). Experience working with large datasets, preferably in an industrial or enterprise environment. Proven skills in performance tuning, data modeling, data mining, and cloud security (Azure preferred).

Nice to Have: Knowledge of Azure data services (storage, networking, billing, security). Experience with DevOps, agile software development, and working in international/multicultural teams.

Candidate Requirements: 4+ years of experience as a data engineer. Bachelor's or master's degree in computer science, information systems, or related fields. Strong problem-solving skills and high attention to detail. Proficiency in English (written and verbal).

Please share your resume at Neesha1@damcogroup.com
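For reference on the pipeline-building work described above, here is a minimal sketch of an idempotent upsert into a Delta table, a common pattern when syncing sources such as SAP or Dynamics into a lakehouse. Table, path, and column names are hypothetical placeholders, and the snippet assumes an environment with Delta Lake available (e.g., Databricks).

```python
# Illustrative sketch only: an idempotent MERGE (upsert) into a Delta
# table. Assumes Delta Lake is available; names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

updates = spark.read.parquet("/mnt/staging/customers")  # placeholder staging data

target = DeltaTable.forName(spark, "silver.customers")  # placeholder target table
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")  # match on key
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # add new rows
    .execute()
)
```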
Posted 3 weeks ago
6.0 - 8.0 years
9 - 10 Lacs
Gurugram, Bengaluru, Delhi / NCR
Hybrid
Job Description. Key Responsibility Statement: The Senior Associate, Data Engineering will support the design, build, and enablement of analytics-ready data sets within our data warehouse, ensuring that our team can deliver efficient and accurate insights. Collaborate closely with analytics team members to translate business needs into data solutions. Create and manage datasets to support analytics deliverables. Design and develop data pipelines and perform data validation and issue investigation. Work alongside stakeholders to translate requirements into technical solutions, and collaborate with technical teams to create, investigate, and iterate on data processes. You will report to the Senior Manager of Analytics, as part of our global Performance Marketing team. The role is hybrid/remote, working from one of our office locations at least 3 days per week.

Your Skills & Experience: Minimum 3 years of experience working with data engineering tools, preferably Databricks. Minimum 2 years of experience with data ingestion, ETL processes, and automation. Minimum 2 years of experience designing and developing BI dashboards, ideally with Power BI. Strong proficiency in SQL and Python for data manipulation and transformation. Strong problem-solving skills and the ability to diagnose data-related issues. We are seeking a dynamic and motivated individual who embodies a hands-on, roll-up-your-sleeves, go-getter mentality and is proactive in taking the initiative to tackle the unknown head on. Bachelor's degree.

Benefits of Working Here: Gender-neutral policy. Generous parental leave and a new-parent transition program. Employee assistance programs to help you in wellness and well-being.

A Tip from the Hiring Manager: This person should be highly organized, adapt quickly to change, and thrive in a fast-paced organization. This is a job for the curious, make-things-happen kind of person: someone who thinks like an entrepreneur and can motivate and move their team to achieve and drive impact.

Interested candidates can reach out to me at saakshib1@damcogroup.com or WhatsApp their CV to 8780529873. Note: candidates who are immediate joiners or can start within a week are preferred.
Posted 3 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
Pune
Work from Office
Key Responsibilities: Lead the supply chain analytics project. Provide analytical support to one or more functional areas. Use common processes, tools, and information systems to enable supply chain analysis. Ensure data integrity of all analytics and reports. Analyze and interpret Key Performance Indicators (KPIs) to identify areas for improvement and action plans. Participate in Six Sigma and supply chain improvement projects. Use existing business systems to provide analytics and reporting that are capable and repeatable.

Skills: Good command of analytical tools like Power BI, Apps, Databricks, SQL, etc. Previous experience executing supply chain analytics projects (automation, visualization). Project management skills desirable.

Experience: Minimal to intermediate level of experience required.

Additional Information: Working hours: 12 pm to 9 pm.

Qualifications: College, university, or equivalent degree required. This position may require licensing for compliance with export controls or sanctions regulations.

Competencies: Communicates effectively: developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Drives results: consistently achieving results, even under tough circumstances. Global perspective: taking a broad view when approaching issues, using a global lens. Manages complexity: making sense of complex, high-quantity, and sometimes contradictory information to effectively solve problems. Optimizes work processes: knowing the most effective and efficient processes to get things done, with a focus on continuous improvement. Values differences: recognizing the value that different perspectives and cultures bring to an organization.
Posted 3 weeks ago
6.0 - 11.0 years
13 - 18 Lacs
Ahmedabad
Work from Office
About the Company: e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically effective and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid work environment (3 days in office, 2 days at home). We believe the combination of our unique culture, total compensation, workplace flexibility and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us

Job Summary: We're looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected person will play a critical role in shaping the company's global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role.

Key Responsibilities: Enterprise Data Strategy: Design, define, and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation; ensure data architecture and products align with enterprise standards and best practices. Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA); oversee quality, security, and compliance initiatives. Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka. Marketing Technology Integration: Ensure the data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry. Architectural Leadership: Act as a hands-on architect with the ability to lead through influence; guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap.
Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact; lead the integration of data from multiple sources into a unified data warehouse. Cloud Optimization: Optimize data flows and storage for performance and scalability; lead data migration priorities, manage metadata repositories and data dictionaries, optimize databases and pipelines for efficiency, and manage and track quality, cataloging, and observability. AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models; evaluate emerging data technologies and tools to improve the data architecture. Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership. Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools. Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders. Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data.

Qualifications and Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 18+ years of experience in information technology. 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems. Strong understanding of data use cases in the CPG/FMCG sector. Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred. Awareness of architecture frameworks like TOGAF; certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles. Must possess excellent communication skills and a proven ability to work cross-functionally across global teams. Should be capable of leading with influence, not just execution. Knowledge of data warehousing, ETL/ELT processes, and data modeling. Deep understanding of data modeling principles, including schema design and dimensional data modeling. Strong SQL development experience, including SQL queries and stored procedures. Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering. Familiarity with data security and governance best practices. Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP. Excellent problem-solving abilities with a focus on data analysis and interpretation. Strong communication and collaboration skills. Ability to translate complex technical concepts into actionable business strategies. Proficiency in one or more programming languages such as Python, Java, or Scala.

This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified and shall not be considered a detailed description of all the work required inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisors' discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.
Posted 3 weeks ago
8.0 - 13.0 years
15 - 30 Lacs
Hyderabad
Remote
Position: Lead Data Engineer. Experience: 7+ years. Location: Hyderabad | Chennai | Remote.

Summary: We are seeking a Lead Data Engineer with 7+ years of experience to lead the development of ETL pipelines, data warehouse solutions, and analytics infrastructure. The ideal candidate will have strong experience in Snowflake, Azure Data Factory, dbt, and Fivetran, with a background in managing data for analytics and reporting, particularly within the healthcare domain.

Responsibilities: Design and develop ETL pipelines using Fivetran, dbt, and Azure Data Factory for internal and client projects involving platforms such as Azure, Salesforce, and AWS. Monitor and manage production ETL workflows and resolve operational issues proactively. Document data lineage and maintain architecture artifacts for both existing and new systems. Collaborate with QA and UAT teams to produce clear, testable mapping and design documentation. Assess and recommend data integration tools and transformation approaches. Identify opportunities for process optimization and deduplication in data workflows. Implement data quality checks in collaboration with data quality analysts. Contribute to the design and development of large-scale data warehouses, MDM solutions, data lakes, and data vaults.

Required Skills & Qualifications: Bachelor's degree in Computer Science, Software Engineering, Mathematics, or a related field. 6+ years of experience in data engineering, software development, or business analytics. 5+ years of strong hands-on SQL development experience. Proven expertise in Snowflake, Azure Data Factory (ADF), and ETL tools such as Informatica, Talend, dbt, or similar. Experience in the healthcare industry, with an understanding of PHI/PII requirements. Strong analytical and critical thinking skills. Excellent communication and interpersonal abilities. Proficiency in scripting or programming languages such as Python, Perl, Java, or shell scripting in Linux/Unix environments. Familiarity with BI/reporting tools like Power BI, Tableau, or Cognos. Experience with big data technologies such as Snowflake (Snowpark), Apache Spark, Hadoop, MapReduce, Sqoop, Hive, Pig, HBase, and Flume.
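As background on the Snowflake work this role centers on, here is a minimal sketch of querying Snowflake from Python with the official connector. All connection values are hypothetical placeholders; a real pipeline would pull credentials from a secrets manager rather than hard-coding them.

```python
# Illustrative sketch only: querying Snowflake with the official Python
# connector. Connection values are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",       # in practice, load from a secrets manager
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT table_name FROM information_schema.tables LIMIT 10")
    for (table_name,) in cur:    # cursor iteration yields row tuples
        print(table_name)
finally:
    conn.close()                 # always release the session
```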
Posted 3 weeks ago
3.0 - 6.0 years
15 - 25 Lacs
Bangalore Rural, Bengaluru
Work from Office
Project: Search team. Top skills: Python, machine learning models, Databricks, AWS. Hands-on Python programming is a must. Must have experience with machine learning models. Databricks experience is a must. AWS cloud experience is required. Must have experience with ML jobs such as monitoring alerts, model deployments, integration, and automation pipelines.

Desirable Skills: At least 3 years of experience in an MLOps role. Has supported ML models in production at scale.

Role & Responsibilities: Monitor our support channels/queues. Monitor and troubleshoot issues reported by our automation, alerts, or customers. Support online production services used for serving online ML models. Collaborate with the team to automate components and processes. Understand ML Platform capabilities and integrations end to end. Provide feedback to ML Platform engineers about recurring issues or areas that can be improved, based on interactions with customers and the system. Contribute to documentation efforts.
Posted 3 weeks ago
8.0 - 12.0 years
15 - 22 Lacs
Pune, Bengaluru
Work from Office
Job Title: Senior Data Engineer. Company: NAM Info Private Limited. Location: Bangalore. Experience: 6-8 years.

Responsibilities: Develop and optimize data pipelines using Azure Databricks and PySpark. Write SQL/advanced SQL queries for data transformation and analysis. Manage data workflows with Azure Data Factory and Azure Data Lake. Collaborate with teams to ensure high-quality, efficient data solutions.

Required Skills: 6-8 years of experience in Azure Databricks and PySpark. Advanced SQL query skills. Experience with Azure cloud services, ETL processes, and data optimization.

Please send profiles for this role to narasimha@nam-it.com.
Posted 3 weeks ago
4.0 - 8.0 years
15 - 30 Lacs
Bengaluru
Hybrid
We are seeking a skilled and experienced Machine Learning Engineer to join our team. The ideal candidate will have a strong background in Python and PyTorch, along with 4-8 years of experience deploying ML/AI models to production. This role requires excellent analytics skills and good working knowledge of Databricks. You will work closely with data scientists, clinicians, software engineers, and product teams to design, build, and optimize scalable machine learning solutions.

Role & responsibilities: Develop, train, and optimize machine learning models using PyTorch and other ML frameworks. Deploy and maintain ML models in production environments, ensuring scalability, performance, and reliability. Utilize Databricks for data processing, model training, model deployment, and pipeline optimization. Deploy Retrieval-Augmented Generation (RAG) pipelines to production for improved AI-driven applications. Collaborate with data engineers to design and implement ETL workflows and data pipelines. Perform rigorous testing, validation, and monitoring of deployed models. Optimize model inference for low-latency and high-throughput applications. Work with stakeholders to translate business problems into ML solutions. Stay up to date with the latest advancements in machine learning, deep learning, and AI deployment strategies.

Preferred candidate profile: Proficiency in Python and ML frameworks such as PyTorch. 4-8 years of experience deploying machine learning models to production. Knowledge of MLflow for experiment tracking and model management. Strong experience with Databricks for ML development and deployment. Hands-on experience with MLOps, CI/CD pipelines, and cloud-based deployment (AWS, Azure, or GCP). Solid understanding of data structures, algorithms, and software engineering principles. Experience working with large-scale datasets and distributed computing frameworks. Experience deploying Retrieval-Augmented Generation (RAG) pipelines to production. Excellent analytical and problem-solving skills. Strong communication skills and the ability to work in a collaborative team environment.

Preferred Qualifications: Experience deploying models in the healthcare domain. Experience with feature engineering, data preprocessing, and model explainability. Knowledge of containerization (Docker, Kubernetes) and workflow orchestration tools. Familiarity with LLMs, NLP, or reinforcement learning is a plus.
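For context on the MLflow experiment tracking this posting mentions, here is a minimal sketch of logging a PyTorch model with MLflow. The model, parameters, and metric values are toy placeholders, not a real production workflow.

```python
# Illustrative sketch only: tracking a PyTorch model with MLflow. The
# model and logged values are toy placeholders.
import mlflow
import mlflow.pytorch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 1))

with mlflow.start_run():
    mlflow.log_param("hidden_units", 16)   # record hyperparameters
    mlflow.log_metric("val_loss", 0.42)    # record (placeholder) metrics
    mlflow.pytorch.log_model(model, artifact_path="model")  # version the model

# Later, the logged model can be reloaded for serving:
# model = mlflow.pytorch.load_model("runs:/<run_id>/model")
```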
Posted 3 weeks ago
4.0 - 8.0 years
4 - 8 Lacs
Noida, Uttar Pradesh, India
On-site
Role & responsibilities: 8x5 development support to the customer. Set up development, test, and production environments on-premises and/or in the cloud. Act as a technical liaison between the Product Owner, Support, and the Development team. Responsible for the overall delivery and the solution architecture of the application/feature/module the team will be working on. Guide the teams across solution conceptualization, proof of concept, effort estimation, design, development, implementation, go-live, and support phases. Maintain coding quality and documentation around development, release rollout to production, and knowledge transfer. Participate in technology selection, explore current and emerging technologies, and propose changes constantly as needed. Write secure, clean, reusable, and well-documented code. Analyze and document requirements. Assess the system architecture currently in place and work with the technical team to recommend solutions to improve it. Knowledge of basic security fundamentals and protocols is a plus.

Preferred candidate profile: Minimum of 4+ years of related work experience with Databricks and Power BI. Mandatory skill set, with strong experience in the following: data ingestion scripting (e.g., PowerShell, Python); Databricks and SQL; Power BI (DAX). Nice to have: experience with ServiceNow customization. Must be skilled in integration/API/microservices. Strong understanding of software design patterns and principles. Secondary skill set: nice to have experience with CI/CD, Jenkins, and Unix/Linux.
Posted 3 weeks ago