
984 Databricks Jobs - Page 30

JobPe aggregates listings from multiple portals for easy access, but you apply directly on the original job portal.

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office


Overview
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes.
- Analyze existing SQL queries, functions, and stored procedures for performance improvements.
- Develop database routines such as procedures, functions, and views.
- Participate in data migration projects and understand technologies like Delta Lake/warehouse.
- Debug and solve complex problems in data pipelines and processes.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Strong understanding of distributed data processing platforms such as Databricks and BigQuery.
- Proficiency in Python, PySpark, and SQL.
- Experience with performance optimization for large datasets.
- Strong debugging and problem-solving skills.
- Fundamental knowledge of cloud services, preferably Azure or GCP.
- Excellent communication and teamwork skills.

Nice to have
- Experience in data migration projects.
- Understanding of technologies like Delta Lake/warehouse.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
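As context for the partitioning and Spark-optimization techniques the Overview names, here is a minimal PySpark sketch; the paths, table, and column names are illustrative assumptions, not from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Read raw data; the source path is a placeholder.
orders = spark.read.parquet("/mnt/raw/orders")

# Prune early and repartition on the write key to limit shuffle skew.
recent = (
    orders
    .filter(F.col("order_date") >= "2024-01-01")
    .repartition("region")
)

# Writing partitioned by a low-cardinality column lets readers skip files.
(recent.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("region")
    .save("/mnt/curated/orders"))
```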

Posted 1 month ago

Apply

3.0 - 7.0 years

0 - 3 Lacs

Pune

Work from Office


We are urgently hiring a Sr. Data Engineer for our Pune location (work from office). Candidates need 4+ years of experience (IIT graduates only) and hands-on experience with Azure, Databricks, PySpark, and Python. Immediate joiners preferred.

Posted 1 month ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office


Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data; some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
- Core Java, Spring Boot, Apache Spark, Spring Batch, Python.
- Exposure to SQL databases such as Oracle, MySQL, or Microsoft SQL Server is a must.
- Experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have.
- Exposure to NoSQL databases such as Neo4j or a document database is also good to have.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
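The normalize, quality-check, and assign-identifier flow in the Overview might look like the following minimal PySpark sketch; the feeds, schema, and validation rule are assumptions for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

vendor_a = spark.read.json("/mnt/raw/vendor_a")  # placeholder paths
vendor_b = spark.read.json("/mnt/raw/vendor_b")

# Normalize both feeds to a common shape before matching.
def normalize(df, name_col):
    return df.select(
        F.upper(F.trim(F.col(name_col))).alias("company_name"),
        F.col("isin"),
    )

combined = normalize(vendor_a, "name").unionByName(normalize(vendor_b, "company"))

# Basic quality gate: drop rows failing a structural check (ISIN format).
valid = combined.filter(F.col("isin").rlike("^[A-Z]{2}[A-Z0-9]{9}[0-9]$"))

# Assign a deterministic internal identifier for downstream applications.
released = valid.withColumn("internal_id", F.sha2(F.col("isin"), 256))
```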

Posted 1 month ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office


Years of experience: 5-12. Location: PAN India.

OFSAA Data Modeler
- Experience in designing, building, and customizing the OFSAA data model, and in validating the data model.
- Excellent knowledge of data model guidelines for staging, processing, and reporting tables.
- Knowledge of data model support for configuring UDPs and subtype/supertype relationship enhancements.
- Experience on the OFSAA platform (OFSAAI) with one or more of the following OFSAA modules: OFSAA Financial Solution Data Foundation (preferred); OFSAA Data Integrated Hub (optional).
- Good in SQL and PL/SQL.
- Strong in data warehouse principles and ETL/data flow tools.
- Excellent analytical and communication skills.

OFSAA Integration SME - DIH/batch run framework
- Experience in ETL processes; familiar with OFSAA.
- DIH setup: EDS, EDD, T2T, etc.
- Familiar with the different seeded tables, SCD, DIM, hierarchies, lookups, etc.
- Has worked with FSDF and knows the STG, CSA, and FACT table structures.
- Has worked with different APIs, out-of-the-box connectors, etc.
- Familiar with Oracle patching and SRs.
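OFSAA drives SCD processing through its own batch framework, but purely to illustrate the type-2 dimension pattern this posting names, here is a hedged sketch using Delta Lake's merge API (not OFSAA code; every name is made up):

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/staging/customers")  # placeholder source
dim = DeltaTable.forPath(spark, "/mnt/dim/customers")

# Type-2 idea: expire the current row when an attribute changes, and
# insert brand-new keys. A full SCD2 job adds a second step that inserts
# the new version of each changed row.
(dim.alias("t")
    .merge(updates.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(
        condition="t.address <> s.address",
        set={"is_current": "false", "end_date": "current_date()"})
    .whenNotMatchedInsert(values={
        "customer_id": "s.customer_id",
        "address": "s.address",
        "is_current": "true",
        "start_date": "current_date()",
        "end_date": "null"})
    .execute())
```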

Posted 1 month ago

Apply

5.0 - 8.0 years

16 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


ML Engineer (ML Ops) - Chennai / Bangalore / Hyderabad

Curious about the role? Here is what your typical day would look like. We are looking for a Machine Learning Engineer/Senior MLE who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
- Engage with clients to understand their business context.
- Translate business problems and technical constraints into technical requirements for the desired analytics solution.
- Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
- 6+ years of experience, with at least 4+ years of relevant MLOps experience.
- Proficiency in structured Python (mandatory).
- Proficiency in at least one cloud technology is mandatory (AWS, Azure, or GCP).
- Proficiency in Azure Databricks.
- Follows good software engineering practices and has an interest in building reliable and robust software.
- Good understanding of data science concepts and the DS model lifecycle.
- Working knowledge of Linux or Unix environments, ideally in a cloud environment.
- Working knowledge of Spark/PySpark is desirable.
- Model deployment/model monitoring experience is desirable.
- CI/CD pipeline creation is good to have.
- Excellent written and verbal communication skills.
- B.Tech from a Tier-1 college, or an M.S. or M.Tech, is preferred.

You are important to us; let's stay connected! Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry. Additional benefits: health insurance (self and family), a virtual wellness platform, and knowledge communities.
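Since the posting centers on MLOps and the model lifecycle, a minimal MLflow tracking sketch gives a flavor of the work; the model, data, and metric are illustrative (MLflow is an assumption here, though it ships with Azure Databricks):

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)        # tracked per run for comparison
    mlflow.sklearn.log_model(model, "model")  # artifact for later deployment
```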

Posted 1 month ago

Apply

7.0 - 12.0 years

10 - 18 Lacs

Bengaluru

Hybrid


Job Goals
- Design and implement resilient data pipelines to ensure data reliability, accuracy, and performance.
- Collaborate with cross-functional teams to maintain the quality of production services and smoothly integrate data processes.
- Oversee the implementation of common data models and data transformation pipelines, ensuring alignment to standards.
- Drive continuous improvement in internal data frameworks and support the hiring process for new Data Engineers.
- Regularly engage with collaborators to discuss considerations and manage the impact of changes.
- Support architects in shaping the future of the data platform and help land new capabilities into business-as-usual operations.
- Identify relevant emerging trends and build compelling cases for adoption, such as tool selection.

Ideal Skills & Capabilities
- A minimum of 6 years of experience in a comparable Data Engineer position is required.
- Data engineering expertise: proficiency in designing and implementing resilient data pipelines, ensuring data reliability, accuracy, and performance, with practical knowledge of modern cloud data technology stacks (Azure).
- Technical proficiency: experience with Azure Data Factory and Databricks, and skilled in Python, Apache Spark, or other distributed data programming frameworks.
- Operational knowledge: in-depth understanding of data concepts, data structures, modelling techniques, and provisioning data to support varying consumption needs, along with accomplished ETL/ELT engineering skills.
- Automation & DevOps: experience using DevOps toolchains for managing CI/CD, and an automation-first mindset in building solutions, including self-healing and fault-tolerant methods.
- Data management principles: practical application of data management principles such as security and data privacy, with experience handling sensitive data through techniques such as anonymisation, tokenisation, and pseudonymisation.
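For the last bullet, a minimal PySpark sketch of column-level pseudonymisation; the salt handling and column names are assumptions for illustration:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/mnt/raw/customers")  # placeholder path

SALT = "rotate-me"  # in practice, fetch from a secret store, never hard-code

pseudonymised = (
    df
    # Replace the direct identifier with a keyed hash so records stay
    # joinable across datasets without exposing the raw value.
    .withColumn(
        "customer_key",
        F.sha2(F.concat_ws("|", F.lit(SALT), F.col("email")), 256))
    .drop("email")
)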

Posted 1 month ago

Apply

5.0 - 10.0 years

9 - 18 Lacs

Coimbatore, Bengaluru

Hybrid


Job Title: SQL Developer
Location: Bangalore / Coimbatore
Job Type: Full time, hybrid (night shift)
Experience: 5+ years
Shift Timings: US shift, 6:30 PM to 3:30 AM
Preferred: Immediate joiners or 15 days' notice

About LogixHealth: At LogixHealth we provide expert coding and billing services that allow physicians to focus on providing great clinical care. LogixHealth was founded in the 1990s by physicians to service their own practices and has grown to become the nation's leading provider of unsurpassed software-enabled revenue cycle management services, offering a complete range of solutions, including coding and claims management and the latest business intelligence reporting dashboards, for clients in 40 states. Since our first day, we have had a clear vision of a better healthcare system and have continually evolved to get there. In addition to providing expert revenue cycle services, we utilize proprietary software to provide valuable financial, clinical, and other data insights that directly improve the quality and efficiency of patient care. At LogixHealth, we're committed to making intelligence matter through our pillars of Physician-Inspired Knowledge, Unrivaled Technology, and Impeccable Service. To learn more about us, visit our website https://www.logixhealth.com

What we offer: At LogixHealth, we value our people and are committed to their growth and well-being. We strive to create an environment where innovation is rewarded and careers flourish. Join us and be part of something meaningful.
- Opportunity to shape the future of digital healthcare products
- Collaborative and inclusive company culture
- Competitive salary and performance-based incentives
- Professional development and training opportunities
- Flexible work environment with hybrid options

Job Description: As a SQL Developer at LogixHealth, you will work closely with a collaborative team to understand complex business logic and develop robust, high-performance SQL solutions that support critical business operations. You'll play a key role in peer reviews, provide best-practice recommendations, and contribute to process improvements.
Tasks and Responsibilities
- Participate in peer code reviews to ensure code quality and adherence to best practices
- Recommend and implement improvements to database structure and processes
- Provide night shift support for production systems, including monitoring and troubleshooting
- Optimize database performance through query tuning, indexing, and partitioning
- Assist in deploying database changes to production and staging environments
- Manage database access and audit user permissions in line with security policies
- Troubleshoot and maintain Azure CI/CD pipelines for database deployments
- Write and maintain PowerShell scripts for automation, deployment, and monitoring
- Design, deploy, and support SSIS packages for data integration and ETL processes

Required Skills and Knowledge
- Advanced T-SQL development: proficient in writing efficient queries, stored procedures, and views
- Dynamic SQL: ability to implement customizable logic using dynamic SQL
- Performance tuning: expertise in analyzing execution plans, indexing, and query optimization
- Production support: capable of resolving SQL issues in a 24/7 environment, especially during night shifts
- Collaboration: experience working with cross-functional teams, including analysts and business users
- Code review and standards: participates in code reviews and promotes clean, standardized code practices
- Security and compliance: familiarity with data governance, security standards, and audit practices

Preferred Skills
- Cloud experience: exposure to Azure SQL, Azure DevOps, Databricks, and Airflow
- Scripting: Python, Bash, PowerShell
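In the spirit of the night-shift monitoring duties above, a small Python sketch that flags long-running queries via SQL Server DMVs; the connection string and threshold are assumptions (the posting's own tooling is T-SQL, PowerShell, and SSIS):

```python
import pyodbc

# Placeholder connection; real deployments would pull credentials from a vault.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=prod-sql;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes"
)

THRESHOLD_MS = 60_000  # flag anything running longer than a minute

# DMV-based check for long-running requests, a common night-shift task.
rows = conn.cursor().execute(
    """
    SELECT r.session_id, r.total_elapsed_time, t.text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.total_elapsed_time > ?
    """,
    THRESHOLD_MS,
).fetchall()

for session_id, elapsed_ms, sql_text in rows:
    print(f"session {session_id}: {elapsed_ms} ms :: {sql_text[:120]}")
```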

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office


Roles and Responsibilities:
- Review and analyze structured, semi-structured, and unstructured data sources for quality, completeness, and business value.
- Design, architect, implement, and test rapid prototypes that demonstrate the value of the data, and present them to diverse audiences.
- Participate in early-stage design and feature definition activities.
- Implement robust data pipelines using the Microsoft and Databricks stack.
- Create reusable and scalable data pipelines.
- Be a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- Bring strong communication skills to effectively convey complex data insights to non-technical stakeholders.

Critical Skills to Possess:
- Advanced working knowledge and experience with relational and non-relational databases.
- Advanced working knowledge and experience with API data providers.
- Experience building and optimizing big data pipelines, architectures, and datasets.
- Strong analytic skills related to working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines (see the sketch below).
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, API Azure, Azure Function, Power BI, Azure Cognitive Services.
- Azure DevOps experience to deploy the data pipelines through CI/CD.

Skills: Azure Databricks, Azure Data Factory, big data pipelines, PySpark, Azure Synapse, Azure DevOps, Azure Data Lake Storage Gen2, Event Hub, Azure DWH, API Azure.

Experience: Minimum 5-7 years of practical experience as a Data Engineer, with in-production experience on the Azure cloud stack.

Preferred Qualifications: BS degree in Computer Science or Engineering, or equivalent experience.
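A compressed PySpark sketch of the kind of ADLS-to-Delta ETL pipeline this posting describes; the abfss path, schema, and table names are illustrative assumptions:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# The container and storage-account names are placeholders.
src = "abfss://raw@examplelake.dfs.core.windows.net/sales/2024/*.csv"

sales = (spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(src))

# Aggregate raw rows into a daily revenue table.
daily = (sales
    .withColumn("sale_date", F.to_date("sale_ts"))
    .groupBy("sale_date", "store_id")
    .agg(F.sum("amount").alias("revenue")))

# Land the aggregate as a managed Delta table for downstream consumers.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_revenue")
```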

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 18 Lacs

Chennai

Hybrid


Skills and attributes for success
- Delivery of testing needs for BI & DWH projects.
- Ability to communicate effectively with team members across geographies.
- Perform unstructured data / big data testing on both on-premise and cloud platforms.
- Thorough understanding of requirements; provide feedback on the requirements.
- Develop the test strategy for testing BI & DWH projects, covering aspects such as ETL testing and reports testing (front-end and back-end testing), integration testing, and UAT as needed.
- Provide inputs for test planning aligned with the test strategy.
- Perform test case design and identify opportunities for test automation.
- Develop test cases, both manual and automation scripts, as required.
- Ensure test readiness (test environment, test data, tool licenses, etc.).
- Perform test execution and report the progress.
- Report defects and liaise with development and other relevant teams for defect resolution.
- Prepare the test report and provide inputs to the test lead for test sign-off/closure.
- Provide support in project meetings/calls with the client for status reporting.
- Provide inputs on test metrics to the test lead; support analysis of metric trends and implement improvement actions as necessary.
- Handle changes and conduct regression testing.
- Generate test summary reports.
- Coordinate test team members and the development team.
- Interact with client-side people to solve issues and update status.
- Actively take part in providing analytics and advanced analytics testing trainings in the company.

To qualify for the role, you must have
- BE/BTech/MCA/M.Sc.
- Overall 2 to 9 years of experience in testing data warehousing / business intelligence solutions, with a minimum of 2 years of experience in testing BI & DWH technologies and analytics applications.
- Experience in big data testing with the Hadoop/Spark framework and exposure to predictive analytics testing.
- Very good understanding of business intelligence concepts, architecture, and building blocks in areas such as ETL processing, data warehouses, dashboards, and analytics.
- Experience in cloud (AWS/Azure) infrastructure testing is desirable.
- Knowledge of Python data processing is desirable.
- Testing experience in more than one of these areas: data quality, ETL, OLAP, reports.
- Good working experience with SQL Server or Oracle databases and proficiency with SQL scripting.
- Experience in back-end testing of enterprise applications/systems built on different platforms, including Microsoft .NET and SharePoint technologies (see the reconciliation sketch below).
- Experience in ETL testing using commercial ETL tools is desirable.
- Knowledge of or experience in SSRS (SQL Server Reporting Services), Spotfire, and SSIS is desirable.
- Experience or knowledge in data transformation projects, database design concepts, and white-box testing is desirable.

Ideally, you'll also
- Be able to contribute as an individual contributor and, when required, lead a small team.
- Be able to create the test strategy and test plan for testing BI & DWH applications/solutions that are moderate-to-complex or high-risk systems.
- Design test cases and test data, and perform test execution and reporting.
- Be able to perform test management for small projects as and when required.
- Participate in defect triaging and track defects to resolution/conclusion.
- Have experience with or exposure to test automation; scripting experience in Perl and shell is desirable.
- Have experience with test management and defect management tools, preferably HP ALM.
- Have good communication skills (both written and verbal) and be able to articulate concisely and clearly.
- Have a good understanding of SDLC, and the test process in particular.
- Have good analytical, problem-solving, and troubleshooting skills.
- Have a good understanding of the project life cycle and test life cycle; exposure to CMMi and process improvement frameworks is a plus.
- Be ready to take on an individual contributor as well as a team leader role.
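Back-end ETL testing often reduces to reconciliation queries run against source and target; a minimal Python sketch (the DSNs, tables, and checks are assumptions):

```python
import pyodbc

def scalar(conn, sql):
    """Run a query that returns a single value and fetch it."""
    return conn.cursor().execute(sql).fetchone()[0]

src = pyodbc.connect("DSN=source_dwh")  # placeholder DSNs
tgt = pyodbc.connect("DSN=target_dwh")

checks = {
    # Row-count reconciliation: the most basic ETL completeness test.
    "row_count": ("SELECT COUNT(*) FROM stg.orders",
                  "SELECT COUNT(*) FROM dwh.fact_orders"),
    # Aggregate checksum: catches silently dropped or mutated amounts.
    "amount_sum": ("SELECT SUM(amount) FROM stg.orders",
                   "SELECT SUM(amount) FROM dwh.fact_orders"),
}

for name, (src_sql, tgt_sql) in checks.items():
    s, t = scalar(src, src_sql), scalar(tgt, tgt_sql)
    status = "PASS" if s == t else "FAIL"
    print(f"{name}: source={s} target={t} -> {status}")
```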

Posted 1 month ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Hyderabad, Pune

Hybrid


Job Title: Data Engineer
Work Location: India, Pune / Hyderabad (hybrid)

Responsibilities include:
- Design, implement, and optimize end-to-end data pipelines for ingesting, processing, and transforming large volumes of structured and unstructured data.
- Develop data pipelines to extract and transform data in near real time using cloud-native technologies.
- Implement data validation and quality checks to ensure accuracy and consistency.
- Monitor system performance, troubleshoot issues, and implement optimizations to enhance reliability and efficiency.
- Collaborate with business users, analysts, and other stakeholders to understand data requirements and deliver tailored solutions.
- Document technical designs, workflows, and best practices to facilitate knowledge sharing and maintain system documentation.
- Provide technical guidance and support to team members and stakeholders as needed.

Desirable Competencies:
- 8+ years of work experience.
- Proficiency in writing complex SQL queries on MPP systems (Snowflake/Redshift).
- Experience with Databricks and Delta tables.
- Data engineering experience with Spark/Scala/Python.
- Experience with the Microsoft Azure stack (Azure Storage Accounts, Data Factory, and Databricks).
- Experience with Azure DevOps and CI/CD pipelines.
- Working knowledge of Python.
- Comfortable participating in two-week sprint development cycles.

About Us
Founded in 1956, Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Today, Williams-Sonoma, Inc. is one of the United States' largest e-commerce retailers, with some of the best known and most beloved brands in home furnishings. Our family of brands includes Williams-Sonoma, Pottery Barn, Pottery Barn Kids, Pottery Barn Teens, West Elm, Williams-Sonoma Home, Rejuvenation, GreenRow, and Mark and Graham. We currently operate retail stores globally, and our products are also available to customers through our catalogues and online worldwide. Williams-Sonoma has established a technology center in Pune, India to enhance its global operations. The India Technology Center serves as a critical hub for innovation and focuses on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. By integrating advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
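As one concrete reading of the near-real-time bullet, a minimal Databricks Auto Loader sketch that incrementally ingests newly landed files into a Delta table; the paths are placeholders, and Auto Loader (the cloudFiles format) is Databricks-specific, so this is an assumption about the stack:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Auto Loader incrementally picks up new files as they land.
events = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/chk/events_schema")
    .load("/mnt/landing/events"))

cleaned = events.withColumn("ingested_at", F.current_timestamp())

(cleaned.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/events")  # enables safe restarts
    .outputMode("append")
    .start("/mnt/bronze/events"))
```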

Posted 1 month ago

Apply

5.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office


ABOUT THE ROLE
The Global Quality Analytics and Innovation team leads the digital transformation and innovation effort throughout Amgen's Quality organization. We are at the forefront of developing and rolling out data-centric digital tools, employing automation, artificial intelligence (AI), and generative AI to drive end-to-end quality transformation. We are seeking a highly motivated and experienced Senior Data Scientist with a strong background in generative AI, large language models (LLMs), and MLOps, along with an understanding of Quality in regulated environments (e.g., GxP). This role will play a key part in designing, developing, and deploying scalable AI/ML solutions to drive innovation, efficiency, and regulatory compliance across the organization. You will collaborate with cross-functional teams, including software engineers, data engineers, business stakeholders, and quality professionals, to deliver AI-driven capabilities that support strategic business objectives. The ideal candidate is an analytical thinker with excellent technical depth, communication skills, and the ability to thrive in a fast-paced, agile environment.

Key Responsibilities
- Design, build, and deploy generative AI and LLM-based applications using frameworks such as LangChain, LlamaIndex, and others.
- Engineer reusable and effective prompts for LLMs such as OpenAI GPT-4, Anthropic Claude, etc.
- Develop and maintain evaluation metrics and frameworks for prompt engineering.
- Collaborate with business stakeholders to identify AI/ML opportunities, ensuring alignment between technical solutions and business goals.
- Lead the development of MLOps pipelines for model deployment, monitoring, and lifecycle management.
- Conduct data quality assessments, data cleansing, and ingestion of unstructured documents into vector databases.
- Build retrieval algorithms for relevant data identification to support LLMs and AI applications.
- Ensure AI/ML development complies with GxP and other regulatory standards, fostering a strong Quality culture.
- Partner with global and local teams to support regulatory inspection readiness and future technological capabilities in AI.
- Share insights and findings with team members in an Agile (SAFe) environment.

Basic Qualifications
- Doctorate degree, OR
- Master's degree and 4-6 years of experience in Software Engineering, Data Science, or ML Engineering, OR
- Bachelor's degree and 6-8 years of experience, OR
- Diploma and 10-12 years of experience.

Preferred Qualifications
- Proven experience developing and deploying LLM applications.
- Strong foundation in ML algorithms, data science workflows, and NLP.
- Expertise in Python and ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Familiarity with MLOps tools (e.g., MLflow, CI/CD, version control).
- Experience with cloud platforms (AWS, Azure, GCP) and tools like Spark and Databricks.
- Understanding of RESTful APIs and frameworks like FastAPI.
- Experience with BI and visualization tools (e.g., Tableau, Streamlit, Dash).
- Knowledge of GxP compliance and experience working in regulated environments.
- Domain experience in healthcare, biotech, or life sciences is a plus.
- Strong communication skills with the ability to explain complex topics to diverse audiences.
- High degree of initiative, self-motivation, and ability to work in global teams.
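The retrieval step behind the vector-database bullets, stripped to its core: a hedged sketch that ranks documents by cosine similarity, with a stand-in embedding function instead of any specific model or vector store (the document titles are made up):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

docs = ["batch record deviation SOP", "cleaning validation protocol",
        "annual product quality review"]
index = np.stack([embed(d) for d in docs])  # rows are unit vectors

def retrieve(query: str, k: int = 2):
    scores = index @ embed(query)           # cosine similarity via dot product
    top = np.argsort(scores)[::-1][:k]
    return [(docs[i], float(scores[i])) for i in top]

print(retrieve("deviation handling"))
```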

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 15 Lacs

Bengaluru

Remote


- Strong knowledge of the AWS cloud platform, including its services for data storage, processing, and orchestration.
- Strong knowledge of the Databricks platform and its configuration.
- Implementing and managing DevOps practices such as CI/CD using tools like Jenkins or Terraform.
- Integration between S3 and Databricks.
- Setting up Databricks catalogs and schemas.
- Automating deployments and managing infrastructure using tools like Terraform.
- Optimizing Databricks clusters, notebooks, and jobs for performance and efficiency.

Posted 1 month ago

Apply

11.0 - 20.0 years

30 - 45 Lacs

Pune

Remote


Technical Manager, Data Engineering
As a Data Engineering Sr. Manager, you will execute on new data engineering products and features in collaboration with Data Science and Business teams.

Scope: This role focuses on managing, supporting delivery, and strategic initiatives such as data ingestion, integration, transformation, reporting, and analytics. Broadly, it will work with every functional area of the data engineering, data reporting, and data science teams, and with multiple business partners.

Essential Job Functions: The incumbent must be able to perform all of the following duties and responsibilities with or without a reasonable accommodation.
- Lead the data strategy, and own the vision and roadmap of data products to enable decision making and self-service for business and analytics teams.
- Bring solid experience in emerging and traditional data stack components such as batch and real-time data ingestion, ETL, ELT, orchestration tools, on-prem and cloud DW, Python, and structured, semi-structured, and unstructured databases.
- Collaborate closely with the Data Engineering and Product teams to execute on the set roadmap of data ingestion, integration, reporting, and data transformation.
- Work cross-functionally with enterprise-wide stakeholders including (but not limited to) Analytics, Data Science, FP&A, Accounting, merchandising, and pricing teams.
- Build a continuous monitoring process and tools to track data management and data quality, and execute cleaning activities.
- Track the overall data transformation initiative and provide feedback or suggestions from a risk management perspective.
- Design and provide training, and mentor/coach other team members.
- Work with stakeholders to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning.
- Ensure the safe storage and transmission of data in line with privacy and compliance laws, data regulations, and internal company policies.

This position is in a fast-paced and entrepreneurial environment where you will be handling multiple concurrent projects while working independently and in teams. An ideal candidate will possess strong problem-solving and conceptual-thinking abilities in addition to communication, interpersonal, and leadership skills.

Supervisory Responsibility
The position will manage a team comprising data, integration, and QA engineers and a scrum master.

Education and Experience
- Bachelor's or advanced degree in Information Systems, Information Management, Engineering, or a related field.
- 10+ years of experience in data management, data warehousing, and data engineering.
- 5+ years of experience in a manager/lead role managing a set of engineers, including hiring and performance evaluations.
- 3+ years managing a big data technology stack and RESTful APIs.
- Strong knowledge of designing data pipelines and understanding of orchestration, integration, and data transformations.
- Excellent communication skills, both verbal and written, and the ability to build trust-based relationships with business partners.
- Technical knowledge of BI tools such as MicroStrategy and Looker.
- Proven record of handling, managing, collaborating on, and delivering data products on time while meeting quality standards.

Competencies
Demonstrate Adaptability and Desire to Learn -- Works productively in the face of ambiguity or uncertainty. Demonstrates flexibility and resilience in response to obstacles, constraints, adversity, and mistakes. Constructively and resourcefully adapts to changing needs, conditions, priorities, or opportunities. Seeks out opportunities to learn from new discoveries, innovations, ways of looking at things, knowledge, and ideas. Invites and incorporates feedback without becoming defensive.

Perform Analysis -- Integrates information from a variety of sources to arrive at a broader understanding of issues (e.g., company reports plus in-store observations). Defines issues clearly despite incomplete or ambiguous information. Identifies the key issues in complex or ambiguous problems. Approaches problems or issues systematically, looking for connections, trends, and potential causes. Probes and looks past symptoms to determine the underlying causes of problems and issues.

Plan and Execute -- Develops realistic plans (e.g., action steps, timelines) to accomplish objectives. Acquires and leverages resources, processes, and tools to achieve business goals. Prioritizes and balances time, actions, and projects to ensure accomplishment of results. Holds him/herself and the team accountable for outcomes (e.g., achieving goals and complying with policies and procedures). Anticipates and addresses obstacles, redirecting efforts to accelerate the work or improve quality.

Produce Results -- Initiates decisive, timely action to address important issues. Demonstrates a strong sense of ownership and a commitment to achieving meaningful results. Sets challenging, clear goals/targets and expectations for achieving business results. Drives initiatives/efforts to successful completion and closure. Takes personal responsibility to make decisions and take action.

Good Partner -- Identifies and anticipates customer requirements, expectations, and needs. Seeks feedback from customers to identify improvement opportunities. Follows up with customers to ensure problems are solved. Continually searches for ways to improve customer service (including the removal of barriers and providing solutions).

Use Professional Judgment -- Makes logical, rational, and integrative decisions, and arrives at sound conclusions. Chooses the best alternative(s) based on a review of pros, cons, tradeoffs, timing, and probabilities. Evaluates the consequences and implications of alternatives, actions, or decisions (e.g., impact on sales, returns, customer loyalty). Makes timely decisions, balancing analysis with decisiveness.

Work Environment
Most tasks are performed while seated indoors at a personal computer. Limited travel to vendors, seminars, and/or conferences may be required periodically throughout the year.

Posted 1 month ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Pune, Gurugram, Bengaluru

Work from Office


Job Description:
- Expertise in GitLab CI/CD and Git workflows
- Databricks administration experience
- Strong scripting skills (shell, Python, Bash)
- Experience with Jira integration in CI/CD workflows
- Familiarity with DORA metrics and performance tracking
- Proficient with SonarQube and JFrog Artifactory
- Deep understanding of branching and merging strategies
- Strong CI/CD and automated-testing integration skills
- Git and Jira integration
- Infrastructure-as-Code experience (Terraform, Ansible)
- Exposure to a cloud platform (Azure/AWS)
- Familiarity with monitoring/logging (Dynatrace, Grafana, Prometheus, ELK)

Roles & Responsibilities
- Build and manage CI/CD pipelines using GitLab for seamless integration and delivery.
- Administer Databricks workspaces, including access control, cluster management, and job orchestration.
- Automate infrastructure and deployment tasks using scripts (shell, Python, Bash, etc.).
- Implement source control best practices, including branching, merging, and tagging.
- Integrate Jira with CI/CD pipelines to automate ticket updates and traceability.
- Track and improve DORA metrics (deployment frequency, lead time for changes, mean time to restore, change failure rate), as sketched below.
- Manage code quality using SonarQube and the artifact lifecycle using JFrog Artifactory.
- Ensure end-to-end testing is integrated into the delivery pipelines.
- Collaborate across Dev, QA, and Ops teams to streamline DevOps practices.
- Troubleshoot build and deployment issues and ensure high system reliability.
- Maintain up-to-date documentation and contribute to DevOps process improvements.
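The DORA metrics mentioned above are simple arithmetic over deployment events; a minimal Python sketch with made-up event data:

```python
from datetime import datetime, timedelta

# Made-up deployment log: (finished_at, commit_created_at, caused_incident).
deployments = [
    (datetime(2024, 6, 3, 10), datetime(2024, 6, 1, 9), False),
    (datetime(2024, 6, 5, 15), datetime(2024, 6, 4, 11), True),
    (datetime(2024, 6, 7, 12), datetime(2024, 6, 6, 16), False),
]

window_days = 7
freq = len(deployments) / window_days                       # deployment frequency
lead_times = [d - c for d, c, _ in deployments]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)   # lead time for changes
cfr = sum(1 for *_, bad in deployments if bad) / len(deployments)  # change failure rate

print(f"{freq:.2f} deploys/day, avg lead time {avg_lead}, CFR {cfr:.0%}")
```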

Posted 1 month ago

Apply

1.0 - 4.0 years

7 - 10 Lacs

Chennai

Work from Office


Responsibilities: * Design, develop, and optimize data pipelines using Databricks and Azure Data Factory. * Collaborate with cross-functional teams on project requirements and deliverables.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 15 Lacs

Mysuru

Work from Office


- Identify business problems, understand the customer issue, and fix the issue.
- Evaluate recurring issues and work toward permanent solutions.
- Focus on service improvement.
- Troubleshoot technical issues and design flaws.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise
- BE / B.Tech in any stream, M.Sc. (Computer Science/IT), or M.C.A., with a minimum of 3-5 years of experience.
- Expert in Azure IaaS, PaaS, and SaaS services, with hands-on experience in the services below.
- VM, storage accounts, Load Balancer, Application Gateway, VNET, route tables, Azure Bastion, disaster recovery, backup, NSG, Azure Update Manager, Key Vault, etc.
- Azure Web App, Function App, Logic App, AKS (Azure Kubernetes Service) and containerization, Docker, Event Hub, Redis Cache, service mesh and Istio, App Insights, Databricks, AD, DNS, Log Analytics workspace, ARO (Azure Red Hat OpenShift).
- Orchestration and containerization: Docker, Kubernetes, Red Hat OpenShift.
- Security management: firewall management, FortiGate firewall.

Preferred technical and professional experience
- Monitoring through cloud-native tools (CloudWatch, CloudTrail, Azure Monitor, Activity Log, vROps, and Log Insight).
- Server monitoring and management (Windows, Linux, AIX, AWS Linux, Ubuntu Linux).
- Storage monitoring and management (Blob, S3, EBS, backups, recovery, snapshots).

Posted 1 month ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Kochi

Work from Office


Skill: Databricks
Experience: 5 to 14 years
Location: Kochi (walk-in on 14th June)

- Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
- Work experience with Databricks Unity Catalog.
- Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
- Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
- Optimize cluster performance and scalability to handle large volumes of data processing.
- Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Conduct performance tuning and optimization for Spark jobs on Azure Databricks (see the sketch below).
- Provide technical guidance and mentorship to junior data engineers.
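Spark performance tuning frequently starts with join strategy; a minimal broadcast-join sketch (the tables and join key are illustrative assumptions):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.read.table("bronze.transactions")  # large table (placeholder)
dims = spark.read.table("bronze.merchants")      # small lookup table

# Broadcasting the small side avoids a full shuffle of the large table.
joined = facts.join(broadcast(dims), "merchant_id")

# Cache only if the result is reused by several downstream actions.
joined.cache()
print(joined.count())
```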

Posted 1 month ago

Apply

10.0 - 16.0 years

25 - 27 Lacs

Chennai

Work from Office


We at Dexian India are looking to hire a Cloud Data PM with over 10 years of hands-on experience in AWS/Azure, DWH, and ETL. The role is based in Chennai, with a shift from 2.00 PM to 11.00 PM IST.

Key qualifications we seek in candidates include:
- Solid understanding of SQL and data modeling
- Proficiency in DWH architecture, including EDW/DM concepts and star/snowflake schemas
- Experience in designing and building data pipelines on the Azure cloud stack
- Familiarity with Azure Data Explorer, Data Factory, Databricks, Synapse Analytics, Azure Fabric, Azure Analysis Services, and Azure SQL Data Warehouse
- Knowledge of Azure DevOps and CI/CD pipelines
- Previous experience managing scrum teams and working as a Scrum Master or Project Manager on at least 2 projects
- Exposure to on-premise transactional database environments like Oracle, SQL Server, Snowflake, MySQL, and/or Postgres
- Ability to lead enterprise data strategies, including data lake delivery
- Proficiency in data visualization tools such as Power BI or Tableau, and statistical analysis using R or Python
- Strong problem-solving skills with a track record of deriving business insights from large datasets
- Excellent communication skills and the ability to provide strategic direction to technical and business teams
- Prior experience in presales, RFP and RFI responses, and proposal writing is mandatory
- Capability to explain complex data solutions clearly to senior management
- Experience in implementing, managing, and supporting data warehouse projects or applications
- Track record of leading full-cycle implementation projects related to business intelligence
- Strong team and stakeholder management skills
- Attention to detail, accuracy, and ability to meet tight deadlines
- Knowledge of application development, APIs, microservices, and integration components

Tools & Technology Experience Required:
- Strong hands-on experience in SQL or PL/SQL
- Proficiency in Python
- SSIS or Informatica (one of the two is mandatory)
- BI: Power BI or Tableau (one of the two is mandatory)

Posted 1 month ago

Apply

5.0 - 9.0 years

12 - 20 Lacs

Chennai

Remote


USXI is looking for Big Data Developers who will work on collecting, storing, processing, and analyzing huge sets of data. Data Developers must have exceptional analytical skills, showing fluency in the use of tools such as MySQL and strong Python, shell, Java, PHP, and T-SQL programming skills. The candidate must also be technologically adept, demonstrating strong computer skills, and be capable of developing databases using SSIS packages and T-SQL, MSSQL, and MySQL scripts. The candidate will have the ability to design, build, and maintain the business's ETL pipeline and data warehouse, and will demonstrate expertise in data modeling and query performance tuning on SQL Server, MySQL, Redshift, Postgres, or similar platforms.

Key responsibilities will include:
- Develop and maintain data pipelines
- Design and implement ETL processes
- Hands-on data modeling: design conceptual, logical, and physical data models with Type 1 and Type 2 dimensions
- Platform expertise: leverage Microsoft Fabric, Snowflake, and Databricks to optimize data storage, transformation, and retrieval processes
- Knowledge to move the ETL code base from on-premise to cloud architecture
- Understanding data lineage and governance for different data sources
- Maintaining clean and consistent access to all our data sources
- Hands-on experience deploying code using CI/CD pipelines
- Assemble large and complex data sets strategically to meet business requirements
- Enable business users to bring data-driven insights into their business decisions through reports and dashboards

Required Qualifications:
- Hands-on experience in big data technologies, including Scala or Spark (Azure Databricks preferable), Hadoop, Hive, HDFS, Python, Java, and SQL
- Knowledge of Microsoft's Azure cloud
- Experience with, and commitment to, development and testing best practices
- DevOps experience with continuous integration/delivery best practices, technologies, and tools
- Experience deploying Azure SQL Database and Azure Data Factory, and well-acquainted with other Azure services, including Azure Data Lake and Azure ML
- Experience implementing REST API calls and authentication (see the sketch below)
- Experience working with agile project management methodologies
- Computer Science degree/diploma
- Microsoft Certified: DP-203 Azure Data Engineer Associate
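The REST ingestion requirement, sketched with the requests library; the endpoint, auth scheme, and paging fields are assumptions, and the token placeholder would come from a secret store:

```python
import requests

BASE = "https://api.example.com/v1/orders"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token-from-a-secret-store>"}

def fetch_all(page_size: int = 100):
    """Walk a page-numbered API until it returns an empty page."""
    page, out = 1, []
    while True:
        resp = requests.get(BASE, headers=HEADERS,
                            params={"page": page, "limit": page_size},
                            timeout=30)
        resp.raise_for_status()  # surface auth or server errors early
        batch = resp.json()
        if not batch:
            break
        out.extend(batch)
        page += 1
    return out

rows = fetch_all()
print(f"fetched {len(rows)} records")
```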

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Ahmedabad

Work from Office


Role & responsibilities

Senior Data Engineer Job Description
GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we also provide actionable insights to enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities for our customers to maximize their ROIs.

Responsibilities:
- Develop and maintain data pipelines
- Ensure data quality and accuracy
- Design, develop, and maintain large, complex sets of data that meet non-functional and functional business requirements
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using cloud technologies
- Build analytical tools that utilize the data pipelines

Skills:
- Solid experience with SQL and NoSQL
- Strong data modeling skills for data lakes, data warehouses, and data marts, including dimensional modeling and star schemas
- Proficiency with Azure Data Factory data integration technology
- Knowledge of Hadoop or similar big data technology
- Knowledge of Apache Kafka, Spark, Hive, or equivalent
- Knowledge of Azure or AWS analytics technologies

Qualifications:
- BS in Computer Science, Applied Mathematics, or related fields (MS preferred)
- At least 8 years of experience working with OLAP systems
- Microsoft Azure or AWS data engineer certification is a plus

Posted 1 month ago

Apply

5.0 - 9.0 years

15 - 30 Lacs

Hyderabad

Hybrid


Hi! Greetings of the day!
We have openings with one of our product-based clients.

Location: Hyderabad
Notice period: immediate to 30 days only
Work mode: hybrid

Key Purpose Statement - Core Mission
The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage their deep expertise in Azure Synapse, Databricks, cloud platforms, and Python programming to deliver high-quality data solutions.

Responsibilities
Data infrastructure and pipeline development:
- Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
- Optimize data pipelines for performance, scalability, and cost-efficiency.
- Implement best practices for data governance, quality, and security.

Cloud platform management:
- Design and manage cloud-based data infrastructure on platforms such as Azure.
- Utilize cloud-native tools and services to enhance data processing and storage capabilities.
- Understand and design CI/CD pipelines for data engineering projects.

Programming:
- Develop and maintain high-quality, reusable code in the Databricks and Synapse environments for data processing and automation.
- Collaborate with data scientists and analysts to design solutions into data workflows.
- Conduct code reviews and mentor junior engineers in Python, PySpark, and SQL best practices.

If interested, please share your resume at aparna.ch@v3staffing.in

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant experience: 6-15 years
Location: PAN India

Job Description:
- Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions such as AWS EMR, Databricks, and Cloudera.
- Should be very proficient in large-scale data operations using Databricks, and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience working with S3 Data Lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action, and optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with Databricks and Spark SQL on the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Interested candidates can share their resume with sankarspstaffings@gmail.com with the below details inline: overall experience, relevant experience, current CTC, expected CTC, notice period.

Posted 1 month ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid


Warm greetings from SP Staffing Services Private Limited!
We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant experience: 6-15 years
Location: PAN India

Job Description:
- Candidate must be proficient in Databricks.
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates the ability to break a problem down into manageable pieces and implement effective, timely solutions; identifies the problem versus the symptoms.
- Manages problems that require the involvement of others to solve, and reaches sound decisions quickly.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Roles & Responsibilities:
- Provide innovative and cost-effective solutions using Databricks.
- Optimize the use of all available resources.
- Learn and adapt quickly to new technologies as per business needs.
- Develop an operations excellence team, building tools and capabilities that the development teams leverage to maintain high levels of performance, scalability, security, and availability.

Skills:
- 7-10 years of experience in Databricks Delta Lake.
- Hands-on experience with Azure.
- Experience with Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

Interested candidates can share their resume with sankarspstaffings@gmail.com with the below details inline: overall experience, relevant experience, current CTC, expected CTC, notice period.

Posted 1 month ago

Apply

5.0 - 10.0 years

13 - 23 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Role & responsibilities
- Hands-on expertise in provisioning and configuring Azure components, including data services, App Services, Azure Kubernetes Service, Azure AI/ML, and other components.
- Expert developer of Terraform scripts and ARM templates.
- Proficient with GitHub and Azure DevOps for code and infrastructure deployments.
- Proficient with API gateways (Kong) for managing and registering APIs.
- Understands the internals of Azure Databricks and Unity Catalog.
- Good at troubleshooting Azure platform issues.
- Nice to have: Snowflake provisioning and configuration skills.
- Must be an excellent communicator and collaborator.

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 16 Lacs

Pune

Work from Office


COMPANY OVERVIEW
Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY
Working as a member of Domo's Client Services team, the Senior Technical Consultant will be focused on the implementation of fault-tolerant, highly scalable solutions. The successful candidate will have a minimum of 5 years of hands-on experience working with data and will join an enthusiastic, fast-paced, and dynamic team at Domo. A successful candidate will have demonstrated sustained exceptional performance, innovation, creativity, insight, and good judgment.

KEY RESPONSIBILITIES
- Partner with customers, business users, and technical teams to understand data needs and deliver impactful solutions;
- Develop strategies for data acquisition and integration of new data into Domo's Data Engine;
- Map source system data to Domo's data architecture and define integration strategies;
- Lead database analysis, design, and build efforts, if required;
- Design scalable and efficient data models for the data warehouse or data mart (data structure, storage, and integration);
- Implement best practices for data ingestion, transformation, and semantic modelling;
- Aggregate, transform, and prepare large data sets for use within Domo solutions;
- Provide guidance on how to design and optimize complex SQL queries;
- Provide consultation and mentoring to customers on best practices and skills to drive greater self-sufficiency;
- Ensure data quality and perform validation across pipelines and reports;
- Write Python scripts to automate governance processes;
- Create workflows in Domo to automate business processes;
- Build custom Domo applications or custom bricks to support unique client use cases;
- Develop Agent Catalysts to deliver generative-AI-powered insights within Domo, enabling intelligent data exploration, narrative generation, and proactive decision support through embedded AI features;
- Thoroughly review and document existing data pipelines, and guide customers through them to ensure a seamless transition and operational understanding.

JOB REQUIREMENTS
- 5+ years of experience supporting business intelligence systems in a BI or ETL Developer role;
- Expert SQL skills required;
- Expertise with Windows and Linux environments;
- Expertise with at least one of the following database technologies and familiarity with the others: relational, columnar, and NoSQL (e.g., MySQL, Oracle, MSSQL, Vertica, MongoDB);
- Understanding of data modelling skills (i.e., conceptual, logical, and physical model design, with both traditional third-normal-form and dimensional modelling, such as star and snowflake);
- Experience dealing with large data sets;
- Goal-oriented with strong attention to detail;
- Proven experience in effectively partnering with business teams to deliver their goals and outcomes;
- Bachelor's degree in Information Systems, Statistics, Computer Science, or a related field preferred, or equivalent professional experience;
- Excellent problem-solving skills and creativity;
- Ability to think outside the box;
- Ability to learn and adapt quickly to varied requirements;
- Thrive in a fast-paced environment.

NICE TO HAVE
- Experience working with APIs;
- Experience working with web technologies (JavaScript, HTML, CSS);
- Experience with scripting technologies (Java, Python, R, etc.);
- Experience working with Snowflake, Databricks, or BigQuery is a plus;
- Experience defining scope and requirements for projects;
- Excellent oral and written communication skills, and comfort presenting to everyone from entry-level employees to senior vice presidents;
- Experience with statistical methodologies;
- Experience with a wide variety of business data (marketing, finance, operations, etc.);
- Experience with large ERP systems (SAP, Oracle JD Edwards, Microsoft Dynamics, NetSuite, etc.);
- Understanding of data science, data modelling, and analytics.

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS
- Medical insurance provided
- Maternity and paternity leave policies
- Baby bucks: a cash allowance to spend on anything for every newborn or child adopted
- Haute Mama: a cash allowance for a maternity wardrobe (women employees only)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo meal pass
- Health and wellness benefit
- One-time technology benefit: a cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leave up to 3 days
- Bereavement leave up to 5 days

Domo is an equal opportunity employer.

Posted 1 month ago

Apply