
6093 Scala Jobs - Page 35

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 12.0 years

0 Lacs

Karnataka

On-site

As a Big Data Lead with 7-12 years of experience, you will be responsible for software development using multiple computing languages. Your role will involve working on distributed data processing systems and applications, specifically in Business Intelligence/Data Warehouse (BIDW) programs. You should also have prior experience taking development through testing, preferably on the J2EE stack. Your knowledge and understanding of best practices and concepts in Data Warehouse applications will be crucial to your success in this role.

You should possess a strong foundation in distributed systems and computing systems, with hands-on engineering skills. Hands-on experience with technologies such as Spark, Scala, Kafka, Hadoop, HBase, Pig, and Hive is required. An understanding of NoSQL data stores, data modeling, and data management is essential for this position. Good interpersonal communication skills, along with excellent oral and written communication and analytical skills, are necessary for effective collaboration within the team.

Experience with Data Lake implementation as an alternative to a Data Warehouse is preferred. You should have hands-on experience with DataFrames using Spark SQL and proficiency in SQL. A minimum of 2 end-to-end implementations of either a Data Warehouse or a Data Lake is required for this role.
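For illustration only (not part of the posting): a minimal Scala sketch of the kind of Spark SQL DataFrame work the role describes. The paths, column names, and aggregation logic are invented assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SalesEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sales-etl").getOrCreate()

    // Hypothetical source; a real pipeline might read from Hive, Kafka, or HDFS.
    val orders = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/raw/orders.csv")

    // Warehouse-style transformation: keep completed orders, aggregate daily revenue.
    val dailyRevenue = orders
      .filter(col("status") === "COMPLETE")
      .groupBy(col("order_date"))
      .agg(sum("amount").as("revenue"))

    dailyRevenue.write.mode("overwrite").parquet("hdfs:///data/curated/daily_revenue")
    spark.stop()
  }
}
```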

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You are a highly skilled and experienced Solution Architect specializing in Data & AI, with over 8 years of experience. In this role, you will lead and drive data-driven transformation within the organization. Your main responsibility is to design and implement cutting-edge AI and data solutions that align with business objectives. Collaborating closely with cross-functional teams, you will create scalable, high-performance architectures utilizing modern technologies in data engineering, machine learning, and cloud computing.

Your key responsibilities include architecting and designing end-to-end data and AI solutions to address business challenges and optimize decision-making. You will define and implement best practices for data architecture, data governance, and AI model deployment. Collaborating with data engineers, data scientists, and business stakeholders, you will deliver scalable and high-impact AI-driven applications. Additionally, you will lead the integration of AI models with enterprise applications, ensuring seamless deployment and operational efficiency. It is also part of your role to evaluate and recommend the latest technologies in data platforms, AI frameworks, and cloud-based analytics solutions while ensuring data security, compliance, and ethical AI implementation. Guiding teams in adopting advanced analytics, AI, and machine learning models for predictive insights and automation is also a crucial aspect, as is driving innovation by identifying new opportunities for AI and data-driven improvements within the organization.

To excel in this position, you must have over 8 years of experience designing and implementing data and AI solutions. Strong expertise in cloud platforms such as AWS, Azure, or Google Cloud is essential. Hands-on experience with big data technologies like Spark, Databricks, and Snowflake is required, as is proficiency in ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. A deep understanding of data modeling, ETL processes, and data governance frameworks is necessary. Experience in MLOps, model deployment, and automation is expected. Proficiency in Generative AI frameworks and strong programming skills in Python, SQL, and Java/Scala (preferred) are essential. Familiarity with containerization and orchestration (Docker, Kubernetes) is a plus. Excellent problem-solving skills, the ability to work in a fast-paced environment, and strong communication and leadership skills, with the ability to drive technical conversations, are highly valuable.

Preferred qualifications include certifications in cloud architecture, data engineering, or AI/ML; experience with generative AI; a background in developing AI-driven analytics solutions for enterprises; experience with Graph RAG, building AI agents, and multi-agent systems; and additional certifications in AI/GenAI. Proven leadership skills are expected. This role offers perks such as flexible timings, a 5-day work week, a healthy environment, celebrations, opportunities for learning and growth, community building, and medical insurance benefits.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an experienced Software/Data Engineer with a passion for creating meaningful solutions, you will be joining a global team of innovators at a Siemens company. In this role, you will be responsible for developing data integration solutions using Java, Scala, and/or Python, with a focus on data and Business Intelligence (BI). Your primary responsibilities will include building data pipelines, data transformation, and data modeling to support various integration methods and information delivery techniques.

To excel in this position, you should have a Bachelor's degree in an Engineering or Science discipline or equivalent experience, along with at least 5 years of software/data engineering experience, including a minimum of 3 years in a data- and BI-focused role. Proficiency in data integration development using languages such as Python, PySpark, and Spark SQL, as well as experience with relational databases and SQL optimization, is essential. Experience with AWS-based data services (e.g., Glue, RDS, Athena) and the Snowflake cloud data warehouse, along with familiarity with BI tools like Power BI, will be beneficial. Your willingness to experiment with new technologies and adapt to agile development practices will be key to your success in this role.

Join us in creating a brighter future where smarter infrastructure protects the environment and connects us all. Our culture is built on collaboration, support, and a commitment to helping each other grow both personally and professionally. If you are looking to make a positive impact and contribute to a more sustainable world, we invite you to explore how far your passion can take you with us.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Kolkata, West Bengal

On-site

You must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL; working knowledge of Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with time-series data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage, and knowledge of Palantir is required.

You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in statistical computing languages and libraries such as Python/PySpark, Pandas, NumPy, and seaborn/matplotlib is necessary; knowledge of Streamlit is a plus. Familiarity with Scala, GoLang, and Java, and with big data tools such as Hadoop, Spark, and Kafka, is beneficial. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and with NoSQL databases including Hadoop, Cassandra, and MongoDB, is expected. Proficiency in data pipeline and workflow management tools like Azkaban, Luigi, and Airflow is required, as is experience in building and optimizing big data pipelines, architectures, and data sets. You should possess strong analytical skills for working with unstructured datasets.

You will provide innovative solutions to data engineering problems, document technology choices and integration patterns, apply best practices for project delivery with clean code, and demonstrate innovation and proactiveness in meeting project requirements.

Reporting to: Director, Intelligent Insights and Data Strategy
Travel: Must be willing to be deployed at client locations worldwide for long and short terms, and flexible for shorter durations within India and abroad.
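As a hedged illustration of the time-series emphasis above, the Scala/Spark sketch below buckets raw readings into fixed windows. The lake paths, column names, and window size are assumptions, not posting requirements.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object SensorRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("sensor-rollup").getOrCreate()

    // Hypothetical ADLS Gen2 location for raw time-series readings.
    val readings = spark.read
      .parquet("abfss://raw@lakeaccount.dfs.core.windows.net/sensors/readings")

    // Roll raw events up into 15-minute windows per device.
    val rollup = readings
      .groupBy(col("device_id"), window(col("event_time"), "15 minutes"))
      .agg(avg("value").as("avg_value"), max("value").as("max_value"))

    rollup.write.mode("overwrite")
      .parquet("abfss://curated@lakeaccount.dfs.core.windows.net/sensors/rollup_15m")
  }
}
```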

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Thrissur, Kerala

On-site

As a Data Engineer at WAC, you will be responsible for ensuring the availability, reliability, and scalability of the data infrastructure. You will collaborate closely with cross-functional teams to support data-driven initiatives, enabling data scientists, analysts, and business stakeholders to access high-quality data for critical decision-making.

You will design, develop, and maintain efficient ETL processes and data pipelines to collect, process, and store data from various sources. Additionally, you will create and manage data warehouses and data lakes, optimizing storage and query performance for both structured and unstructured data. Implementing data quality checks, validation processes, and error handling will be crucial to ensuring data accuracy and consistency. Administering and optimizing relational and NoSQL databases to ensure data integrity and high availability will also be part of your responsibilities, as will identifying and addressing performance bottlenecks in data pipelines and databases to improve overall system efficiency. Furthermore, you will implement data security measures and access controls to protect sensitive data assets. Collaboration with data scientists, analysts, and stakeholders to understand their data needs and to support analytics and reporting projects is integral to the job, along with maintaining clear and comprehensive documentation for data processes, pipelines, and infrastructure. Monitoring data pipelines and databases, proactively identifying issues, and troubleshooting and resolving data-related problems in a timely manner are also vital aspects of the position.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Technology, or a related field, with at least 4 years of experience in data engineering roles. Proficiency in programming languages such as Python, Java, or Scala is necessary. Experience with data warehousing solutions and database systems, as well as strong knowledge of ETL processes, data integration, and data modeling, is also required. Familiarity with data orchestration and workflow management tools, an understanding of data security best practices and data governance principles, excellent problem-solving skills, and the ability to work in a fast-paced, collaborative environment are essential, as are strong communication skills and the ability to explain complex technical concepts to non-technical team members.

Thank you for your interest in joining the team at Webandcrafts. We look forward to learning more about your candidacy through this application.
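To make the "data quality checks, validation processes, and error handling" responsibility concrete, here is a small Scala/Spark sketch of two generic checks; the rule set (non-null keys, unique business keys) is a simplified illustration, not WAC's actual framework.

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

object QualityChecks {
  // Fails fast if key columns contain nulls or if business keys are duplicated.
  def validate(df: DataFrame, keyCols: Seq[String]): Unit = {
    val nullRows = df.filter(keyCols.map(c => col(c).isNull).reduce(_ || _)).count()
    require(nullRows == 0, s"$nullRows rows have null values in key columns $keyCols")

    val dupKeys = df.groupBy(keyCols.map(col): _*).count().filter(col("count") > 1).count()
    require(dupKeys == 0, s"$dupKeys duplicate key combinations found for $keyCols")
  }
}
```

A pipeline would call something like validate(ordersDf, Seq("order_id")) before writing to the warehouse, so bad batches fail loudly instead of corrupting downstream tables.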

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Morgan Stanley Model Risk - Pricing Models - XVA/IMM Models - Associate

Profile Description
We're seeking someone to join our team as an Associate in Model Risk - Market Risk, covering XVA/IMM pricing models.

Firm Risk Management
In the Firm Risk Management division, we advise businesses across the Firm on risk mitigation strategies, develop tools to analyze and monitor risks, and lead key regulatory initiatives.

Company Profile
Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. Since 1935, Morgan Stanley has been known as a global leader in financial services, always evolving and innovating to better serve our clients and our communities in more than 40 countries around the world.

What You'll Do in the Role
The primary responsibilities of the role include, but are not limited to, the following:
• Provide independent review and validation compliant with MRM policies and procedures, regulatory guidance, and industry-leading practices, including evaluating conceptual soundness, quality of model/tool methodology, model/tool limitations, data quality, and ongoing monitoring of model/tool performance
• Take initiative and responsibility for end-to-end delivery of a stream of Model and Tool Validation and related Risk Management deliverables
• Write Model and Tool Review findings in validation documents that can be used for presentations both internally (model and tool developers, business unit managers, Audit, various global Committees) and externally (Regulators)
• Verbally communicate results and debate issues, challenges, and methodologies with internal audiences, including senior management
• Represent the MRM team in interactions with regulatory and audit agencies as and when required
• Follow financial markets and business trends on a frequent basis to enhance the quality of Model and Tool Validation and related Risk Management deliverables

What You'll Bring to the Role
• A Master's or Doctorate degree in a quantitative discipline such as Statistics, Mathematics, Physics, Computer Science, or Engineering is essential
• Experience in a quant role in validation or development of models/tools, or in a technical role in financial institutions (e.g., developer), is essential
• Strong written and verbal communication skills, including debating different viewpoints and making formal presentations of complex topics to a wider audience, are preferred
• 5+ years of relevant work experience in a model/tool validation role at a bank or financial institution
• Proficient programmer in Python; knowledge of other programming languages like R, Scala, MATLAB, etc. is preferred
• Willingness to learn new and complex topics and adapt oneself (continuous learning) is preferred
• Working knowledge of statistical techniques, quantitative finance, and programming is essential; a good understanding of various complex financial instruments is preferred
• Knowledge of popular machine learning techniques is preferred
• Relevant professional certifications like CQF or CFA, or progress made towards them, are preferred
• Desire to work in a dynamic, team-oriented, fast-paced environment focusing on challenging tasks mixing fundamental, quantitative, and market-oriented knowledge and skills is essential

What You Can Expect From Morgan Stanley
We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren't just beliefs; they guide the decisions we make every day to do what's best for our clients, communities, and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you'll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There's also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross-section of the global communities in which we operate, and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

6 - 11 Lacs

Bengaluru

Work from Office

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Skills & Competencies
• Must have: 6+ years of professional experience in software development, with a strong track record of delivering scalable, impactful solutions
• Must have: Proficiency in at least one of the following: Java, Scala, C#, C++, or Python
• Must have: Expertise in building production-ready RESTful APIs and microservices
• Must have: Experience with both relational (Postgres, MySQL, SQL Server, Oracle) and NoSQL databases at scale
• Must have: Strong knowledge of cloud-native application development using Docker and Kubernetes, and familiarity with on-premise deployments
• Good to have: Hands-on experience in system performance tuning, multi-threading, and memory management
• Good to have: Familiarity with AI/ML, high-performance computing, or data analytics in risk modeling or insurance-related domains

Education
Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Responsibilities
• Deliver high-quality, scalable, and maintainable software solutions that align with product goals and customer impact
• Lead and independently execute critical components of the software development lifecycle, from design through deployment
• Collaborate with cross-functional teams, including product managers and designers, to define and implement solutions that address real-world challenges
• Participate in technical planning and contribute to roadmap development, process optimization, and performance enhancements
• Drive best practices through code reviews, technical discussions, and mentorship of junior engineers
• Investigate and resolve complex technical issues to ensure system reliability, scalability, and performance
• Champion continuous improvement in software quality, team processes, and technical architecture
• Effectively document and communicate technical decisions and solutions to stakeholders across teams

About The Team
Join Moody's Insurance Solutions, where we are shaping the future of global risk analysis for the multi-trillion-dollar P&C insurance industry. Our team builds cutting-edge software, high-performance models, and advanced analytics to address today's most complex challenges, from climate change to cyber threats. You'll work in a collaborative, inclusive environment that values curiosity, innovation, and shared success.

Moody's is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity, or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody's Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet.

Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Agra

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Vadodara

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
• Create Solution Outlines and Macro Designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles
• Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation
• Contribute to reusable components / assets / accelerators to support capability development
• Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies
• Participate in customer PoCs to deliver the outcomes
• Participate in delivery reviews / product reviews and quality assurance, and act as design authority

Preferred Education
Master's Degree

Required Technical and Professional Expertise
• Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems
• Experience in data engineering and architecting data platforms
• Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure cloud experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow
• Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred Technical and Professional Experience
• Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem
• Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric
• Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization that ranks among the world's 500 largest companies. Explore innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Description
This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for the Marketing and Finance business units. It supports the integration of data from various data sources, performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. The position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, and implementation of systems and applications software, and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.

Responsibilities
• Supervises and supports data engineering projects and builds solutions by leveraging strong foundational knowledge in software/application development
• Develops and delivers data engineering documentation
• Gathers requirements, defines the scope, and performs the integration of data for data engineering projects
• Recommends analytic reporting products/tools and supports the adoption of emerging technology
• Performs data engineering maintenance and support
• Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis
• Performs ETL tool capabilities with the ability to pull data from various sources and load the transformed data into a database or business intelligence platform

Required Qualifications
• Codes using programming languages used for statistical analysis and modeling, such as Python, Java, Scala, or C#
• Strong understanding of database systems and data warehousing solutions
• Strong understanding of the data interconnections between organizations' operational and business functions
• Strong understanding of the data life cycle stages: data collection, transformation, analysis, storing data securely, and providing data accessibility
• Strong understanding of the data environment and its ability to scale for demands such as data throughput, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, and data regulations and compliance
• Strong knowledge of data structures, as well as data filtering and data optimization
• Strong understanding of analytic reporting technologies and environments (e.g., Power BI, Looker, Qlik, etc.)
• Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure preferred
• Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to stakeholders
• Bachelor's degree in MIS, mathematics, statistics, or computer science, an international equivalent, or equivalent job experience

Required Skills
• 3 years of experience with Databricks
• Other required experience: SSIS/SSAS, Apache Spark, Python, R and SQL, SQL Server

Preferred Skills
• Scala, Delta Lake, Unity Catalog, Azure Logic Apps, cloud services platform (e.g., GCP, Azure, or AWS)

Contract type: Permanent (CDI)

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

35 - 60 Lacs

Bengaluru

Work from Office

Job Summary
As a Software Engineer you will work as part of a team responsible for actively participating in driving product development and strategy. In addition, you will participate in activities that include testing and debugging of operating systems that run NetApp storage applications. As part of the Research and Development function, the overall focus of the group is on competitive market and customer requirements, supportability, technology advances, product quality, product cost, and time-to-market. Software engineers focus on enhancements to existing products as well as new product development.

Job Requirements
• Responsible for unstructured tasks where the issues addressed are less defined, requiring new perspectives, creative approaches, and more interdependencies.
• Apply attained experience and knowledge to solve problems that are complex in scope and require in-depth evaluation.
• Work effectively with staff up to Sr. Director level within the function, across functions, and with external parties.
• Limited supervision and direction is provided, as this individual can operate, drive results, and set priorities independently.

Technical Skills
• Excellent problem solver, proficient coder, and designer.
• Good experience in Scala and Java.
• Proficient in Docker and microservices; knowledge of SaaS and AWS is an added advantage.
• Strong in data structures and algorithms.
• Expertise in REST API design and implementation.
• Able to write code and lead in certain areas of the product.
• Able to talk to the Architect, understand the architecture, design the system, and translate it into code.
• Able to guide a team of 1-2 junior engineers.

Education
• A minimum of 8 years of experience is required; 8 to 11 years of experience is preferred.
• B.E/B.Tech or M.S in Computer Science or a related technical field.
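As a sketch of the "REST API design and implementation" expertise above, here is a minimal Scala endpoint using Akka HTTP. The framework choice, route, and port are illustrative assumptions, not the product's actual stack.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HealthService {
  def main(args: Array[String]): Unit = {
    // Typed actor system backing the HTTP server.
    implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "health-service")

    // Versioned route exposing a simple JSON health check at GET /api/v1/health.
    val route =
      pathPrefix("api" / "v1") {
        path("health") {
          get {
            complete("""{"status":"ok"}""")
          }
        }
      }

    Http().newServerAt("0.0.0.0", 8080).bind(route)
  }
}
```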

Posted 2 weeks ago

Apply

7.0 - 12.0 years

35 - 50 Lacs

Hyderabad

Work from Office

Job Description: Spark and Java; strong SQL writing skills; data discovery, data profiling, data exploration, and data wrangling skills; Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar tools, FastAPI (secondary). Strong SQL skills to support data analysis and embedded business logic in SQL, data profiling, and gap assessment. Collaborate with development and business SMEs within technology to understand data requirements, and perform data analysis to support and validate business logic, data integrity, and data quality rules within a centralized data platform. Experience working within the banking/financial services industry with a solid understanding of financial products and business processes.
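A hedged sketch of what "embedded business logic in SQL" plus data profiling might look like in Spark (Scala); the S3 locations, tables, and columns are invented for illustration.

```scala
import org.apache.spark.sql.SparkSession

object TradeProfiling {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trade-profiling").getOrCreate()

    // Register hypothetical S3-backed datasets as SQL views.
    spark.read.parquet("s3://example-bucket/trades").createOrReplaceTempView("trades")
    spark.read.parquet("s3://example-bucket/accounts").createOrReplaceTempView("accounts")

    // Business logic in SQL: a left join plus a simple completeness profile per region.
    val profile = spark.sql("""
      SELECT a.region,
             COUNT(*) AS trade_count,
             SUM(CASE WHEN t.notional IS NULL THEN 1 ELSE 0 END) AS missing_notional
      FROM trades t
      LEFT JOIN accounts a ON t.account_id = a.account_id
      GROUP BY a.region
    """)

    profile.show()
  }
}
```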

Posted 2 weeks ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Noida

Work from Office

We are looking for a skilled AI/ML Ops Engineer to join our team to bridge the gap between data science and production systems. You will be responsible for deploying, monitoring, and maintaining machine learning models and data pipelines at scale. This role involves close collaboration with data scientists, engineers, and DevOps to ensure that ML solutions are robust, scalable, and reliable.

Key Responsibilities:
• Design and implement ML pipelines for model training, validation, testing, and deployment.
• Automate ML workflows using tools such as MLflow, Kubeflow, Airflow, or similar.
• Deploy machine learning models to production (cloud) environments.
• Monitor model performance, drift, and data quality in production.
• Collaborate with data scientists to improve model robustness and deployment readiness.
• Ensure CI/CD practices for ML models using tools like Jenkins, GitHub Actions, or GitLab CI.
• Optimize compute resources and manage model versioning, reproducibility, and rollback strategies.
• Work with cloud platforms (AWS) and containerization tools like Kubernetes (AKS).
• Ensure compliance with data privacy and security standards (e.g., GDPR, HIPAA).

Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 5+ years of experience in DevOps, Data Engineering, or ML Engineering roles.
• Strong programming skills in Python; familiarity with R, Scala, or Java is a plus.
• Experience automating ML workflows with tools such as MLflow, Kubeflow, Airflow, or similar.
• Experience with ML frameworks like TensorFlow, PyTorch, Scikit-learn, or XGBoost.
• Experience with ML model monitoring and alerting frameworks (e.g., Evidently, Prometheus, Grafana).
• Familiarity with data orchestration and ETL/ELT tools (Airflow, dbt, Prefect).

Preferred Qualifications:
• Experience with large-scale data systems (Spark, Hadoop).
• Knowledge of feature stores (Feast, Tecton).
• Experience with streaming data (Kafka, Flink).
• Experience working in regulated environments (finance, healthcare, etc.).
• Certifications in cloud platforms or ML tools.

Soft Skills:
• Strong problem-solving and debugging skills.
• Excellent communication and collaboration with cross-functional teams.
• Adaptable and eager to learn new technologies.

Mandatory Competencies
• Data Science and Machine Learning - AI/ML
• Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
• Development Tools and Management - CI/CD
• Data Science and Machine Learning - Gen AI (LLM, Agentic AI, Gen AI-enabled tools like GitHub Copilot)
• Big Data - Hadoop
• Big Data - Spark
• Data Science and Machine Learning - Python
• Behavioral - Communication and collaboration

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Mahesana, Gujarat, India

Remote

bEdge Tech Services (www.bedgetechinc.com) is urgently seeking a passionate and experienced Data Engineer to join our dynamic team in Ahmedabad, Gujarat! Are you ready to shape the future of tech talent? We're building a dedicated team to develop training materials, conduct live sessions, and mentor US-based clients and students. This is a unique opportunity to blend your data engineering expertise with your passion for teaching and knowledge sharing. This is a full-time, Work From Office position based in Ahmedabad. No remote or hybrid options are available.

Location: Ahmedabad, Gujarat, India (Work From Office ONLY)
Experience: 2 - 4 years
Salary: ₹35,000 - ₹40,000 per month + Performance Incentives

About the Role: As a key member of our US Client/Student Development team, you'll be instrumental in empowering the next generation of data engineering professionals. Your primary focus will be on:
• Content Creation: Designing and developing comprehensive and engaging training materials, modules, and exercises covering various aspects of data pipeline design, ETL, and data warehousing.
• Live Session Delivery: Conducting interactive live online sessions, workshops, and webinars, demonstrating complex data engineering concepts and practical implementations.
• Mentorship: Providing guidance, support, and constructive feedback to students/clients on their data engineering projects, helping them design robust data solutions and troubleshoot issues.
• Curriculum Development: Collaborating with the team to continuously refine and update data engineering course curricula based on industry trends, new technologies, and student feedback.

Key Responsibilities:
• Develop high-quality training modules on data pipeline design, ETL/ELT processes, data warehousing concepts (dimensional modeling, Kimball/Inmon), and data lake architectures.
• Prepare and deliver engaging live sessions on setting up, managing, and optimizing data infrastructure on cloud platforms (AWS, Azure, GCP).
• Guide and mentor students in building scalable and reliable data ingestion, processing, and storage solutions using various tools and technologies.
• Explain best practices for data quality, data governance, data security, and performance optimization in data engineering.
• Create practical assignments, hands-on labs, and capstone projects that simulate real-world data engineering challenges.
• Stay updated with the latest advancements in big data technologies, cloud data services, and data engineering best practices.

Required Skills & Experience:
• Experience: 2 to 4 years of hands-on industry experience as a Data Engineer or in a similar role focused on data infrastructure.
• Communication: Excellent English communication skills (both written and verbal) are compulsory; the ability to articulate complex technical concepts clearly and concisely to diverse audiences is paramount.
• Passion for Teaching: A strong desire and aptitude for training, mentoring, and guiding aspiring data engineering professionals.
• Analytical Skills: Strong problem-solving abilities, logical thinking, and a structured approach to data infrastructure design.
• Work Ethic: Highly motivated, proactive, and able to work independently as well as collaboratively in a fast-paced environment.
• Location Commitment: Must be willing to work from our Ahmedabad office full-time.

Required Technical Skills:
• Strong programming skills in Python (or Java/Scala) for data processing and scripting.
• Expertise in SQL and experience with relational database systems (e.g., PostgreSQL, MySQL, SQL Server) and/or NoSQL databases (e.g., MongoDB, Cassandra).
• Proven experience with ETL/ELT tools and frameworks (e.g., Apache Airflow, Talend, Fivetran, Data Factory).
• Hands-on experience with at least one major cloud platform (AWS, Azure, or GCP) and its data services (e.g., S3, Redshift, EMR, Glue, Data Lake, Data Factory, BigQuery, Dataproc).
• Familiarity with data warehousing concepts and data modeling techniques (Star Schema, Snowflake Schema).
• Experience with big data technologies (e.g., Apache Spark, Hadoop) is a significant advantage.
• Understanding of data governance, data security, and data lineage principles.

What We Offer:
• A competitive salary and attractive performance-based incentives.
• The unique opportunity to directly impact the careers of aspiring tech professionals.
• A collaborative, innovative, and supportive work environment.
• Continuous learning and professional growth opportunities in a niche domain.
• Be a part of a rapidly growing team focused on global client engagement.

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Jaipur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Faridabad

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Nagpur

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks.

Key Responsibilities:
• Provide technical leadership across Big Data and Python-based projects
• Architect, design, and implement scalable data pipelines and processing systems
• Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization
• Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions
• Conduct code reviews and mentor junior engineers to improve code quality and skills
• Evaluate and implement new tools and frameworks to enhance data capabilities
• Troubleshoot complex data-related issues and support production deployments
• Ensure compliance with data security and governance standards

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
• Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured)
• Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues
• Implements data governance processes and methods for managing metadata, access, and retention for internal and external users
• Designs and provides guidance on building reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages
• Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships
• Participates in optimizing, testing, and troubleshooting data pipelines
• Designs, develops, and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, and others)
• Uses innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity
• Assists with renovating the data management infrastructure to drive automation in data integration and management
• Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban
• Coaches and develops less experienced team members

Competencies
• System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
• Collaborates: Builds partnerships and works collaboratively with others to meet shared objectives.
• Communicates effectively: Develops and delivers multi-mode communications that convey a clear understanding of the unique needs of different audiences.
• Customer focus: Builds strong customer relationships and delivers customer-centric solutions.
• Decision quality: Makes good and timely decisions that keep the organization moving forward.
• Data Extraction: Performs extract-transform-load (ETL) activities from a variety of sources and transforms the data for consumption by various downstream applications and users, using appropriate tools and technologies.
• Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
• Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
• Solution Documentation: Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders to enable improved productivity and effective knowledge transfer to others who were not part of the initial learning.
• Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
• Data Quality: Identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision-making.
• Problem Solving: Solves problems, and may mentor others on effective problem solving, by using a systematic analysis process, leveraging industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent problem recurrence are implemented.
• Values differences: Recognizes the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, is required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred, including:
• Familiarity analyzing complex business systems, industry requirements, and/or data regulations
• Background in processing and managing large data sets
• Design and development for a Big Data platform using open source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
• SQL query language
• Clustered compute cloud-based implementation experience
• Experience developing applications requiring large file movement in a cloud-based environment, and other data extraction tools and methods for a variety of sources
• Experience in building analytical solutions

Intermediate experience in the following is preferred:
• Experience with IoT technology
• Experience in Agile software development

Qualifications
• Strong programming skills in SQL, Python, and PySpark for data processing and automation
• Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines
• Understanding of machine learning and AI techniques, especially for data quality and anomaly detection
• Experience with cloud platforms such as Azure and AWS, and familiarity with Azure Web Apps
• Knowledge of Data Quality and Data Governance concepts (preferred)
• Nice to have: Power BI dashboard development experience

Posted 2 weeks ago

Apply

5.0 - 9.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Databricks Developer
Primary Skill: Azure Data Factory, Azure Databricks
Secondary Skill: SQL, Sqoop, Hadoop
Experience: 5 to 9 years
Location: Chennai, Bangalore, Pune, Coimbatore

Requirements:
• Cloud certified in one of these categories: Azure Data Engineer, Azure Data Factory, Azure Databricks
• Spark (PySpark or Scala), SQL, data ingestion, curation
• Semantic modelling / optimization of the data model to work within Rahona
• Experience in Azure ingestion from on-prem sources, e.g., mainframe, SQL Server, Oracle
• Experience in Sqoop / Hadoop
• Microsoft Excel (for metadata files with requirements for ingestion)
• Any other certificate in Azure/AWS/GCP and hands-on cloud data engineering experience
• Strong programming skills with at least one of Python, Scala, or Java
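For the "Azure ingestion from on-prem source" requirement, the following Scala sketch shows a generic JDBC pull from SQL Server landed as Delta in ADLS Gen2. Hostnames, credentials handling, and paths are placeholders, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession

object OnPremIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("onprem-ingest").getOrCreate()

    // JDBC pull from a hypothetical on-prem SQL Server; secrets come from the environment here,
    // though a real Databricks job would typically use a secret scope.
    val source = spark.read
      .format("jdbc")
      .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
      .option("dbtable", "dbo.orders")
      .option("user", "svc_ingest")
      .option("password", sys.env("SQL_PASSWORD"))
      .load()

    // Land the extract in ADLS Gen2 as Delta for downstream curation.
    source.write.format("delta").mode("append")
      .save("abfss://raw@lakeaccount.dfs.core.windows.net/sales/orders")
  }
}
```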

Posted 2 weeks ago

Apply

6.0 - 11.0 years

14 - 19 Lacs

Bengaluru

Remote

Role: Azure Specialist - CDM Smith
Location: Bangalore
Mode: Remote

Key Responsibilities:
• Databricks Platform: Act as a subject matter expert for the Databricks platform within the Digital Capital team; provide technical guidance, best practices, and innovative solutions.
• Databricks Workflows and Orchestration: Design and implement complex data pipelines using Azure Data Factory or Qlik Replicate.
• End-to-End Data Pipeline Development: Design, develop, and implement highly scalable and efficient ETL/ELT processes using Databricks notebooks (Python/Spark or SQL) and other Databricks-native tools.
• Delta Lake Expertise: Utilize Delta Lake to build a reliable data lake architecture, implementing ACID transactions, schema enforcement, and time travel, and optimizing data storage for performance.
• Spark Optimization: Optimize Spark jobs and queries for performance and cost efficiency within the Databricks environment, demonstrating a deep understanding of Spark architecture, partitioning, caching, and shuffle operations.
• Data Governance and Security: Implement and enforce data governance policies, access controls, and security measures within the Databricks environment using Unity Catalog and other Databricks security features.
• Collaborative Development: Work closely with data scientists, data analysts, and business stakeholders to understand data requirements and translate them into Databricks-based data solutions.
• Monitoring and Troubleshooting: Establish and maintain monitoring, alerting, and logging for Databricks jobs and clusters, proactively identifying and resolving data pipeline issues.
• Code Quality and Best Practices: Champion best practices for Databricks development, including version control (Git), code reviews, testing frameworks, and documentation.
• Performance Tuning: Continuously identify and implement performance improvements for existing Databricks data pipelines and data models.
• Cloud Integration: Integrate Databricks with other cloud services (e.g., Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Key Vault) for a seamless data ecosystem.
• Traditional Data Warehousing & SQL: Design, develop, and maintain schemas and ETL processes for traditional enterprise data warehouses, with expert-level SQL proficiency for complex data manipulation, querying, and optimization within relational database systems.

Mandatory Skills:
• Experience with Databricks, including Databricks Workflows and orchestration
• Python: hands-on experience in automation and scripting
• Azure: strong knowledge of Data Lakes, Data Warehouses, and cloud architecture
• Solution Architecture: experience in designing web applications and data engineering solutions
• DevOps basics: familiarity with Jenkins and CI/CD pipelines
• Communication: excellent verbal and written communication skills
• Fast learner: ability to quickly grasp new technologies and adapt to changing requirements
• Extensive experience with Spark (PySpark, Spark SQL) for large-scale data processing

Additional Information:
Qualifications: BE, MS, M.Tech, or MCA
Certifications: Databricks Certified Associate
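A brief Scala sketch of the Delta Lake features this role highlights (time travel, compaction, retention cleanup); the table path and version number are assumptions for illustration.

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

object DeltaMaintenance {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("delta-maintenance").getOrCreate()
    val path = "abfss://curated@lakeaccount.dfs.core.windows.net/orders"

    // Time travel: query the table as of an earlier version.
    val snapshot = spark.read.format("delta").option("versionAsOf", 12).load(path)
    println(s"Rows at version 12: ${snapshot.count()}")

    // Compact small files, then drop snapshots older than the retention window.
    val table = DeltaTable.forPath(spark, path)
    table.optimize().executeCompaction()
    table.vacuum(168) // retain 7 days of history
  }
}
```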

Posted 2 weeks ago

Apply

3.0 - 6.0 years

8 - 13 Lacs

Bengaluru

Work from Office

KPMG India is looking for an Azure Data Engineer - Assistant Manager to join our dynamic team and embark on a rewarding career journey. Responsibilities include: liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and generating infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure that they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring that your work remains backed up and readily accessible to relevant coworkers; and remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Key Responsibilities: Must have 4+ years of experience in design, development, testing, and deployment. Lead the creation of scalable Data & AI applications using best practices in software engineering such as automation, version control, and CI/CD. Develop and implement rigorous testing strategies to ensure application reliability and performance. Oversee deployment processes, addressing issues related to configuration, environment, or security.

Engineering and Analytics: Translate Data & AI use case requirements into effective data models and pipelines, ensuring data integrity through statistical quality procedures and advanced AI techniques.

API & Microservice Development: Architect and build secure, scalable microservices and APIs, ensuring broad usability, security, and adherence to best practices in documentation and version control.

Platform Scalability & Optimization: Evaluate and select optimal technologies for cloud and on-premise deployments, implementing strategies for scalability, performance monitoring, and cost optimization. Knowledge of machine learning frameworks (TensorFlow, PyTorch, Keras), understanding of MLOps (machine learning operations) and continuous integration/deployment (CI/CD), and familiarity with deployment tools (Docker, Kubernetes) are expected.

Technologies: Demonstrate expertise with Data & AI technologies (e.g., Spark, Databricks), programming languages (Java, Scala, SQL), API development patterns (e.g., HTTP/REST, GraphQL), and cloud platforms (Azure).

Good-to-have skills: Data & AI technologies such as Kafka and Snowflake, programming languages (Python, SQL), and API development patterns (e.g., HTTP/REST, GraphQL).

Location: IND:KA:Bengaluru / Innovator Building, ITPB, Whitefield Rd - Adm: Intl Tech Park, Innovator Bldg
Job ID: R-74975
Date posted: 07/15/2025

Posted 2 weeks ago

Apply

5.0 - 9.0 years

12 - 17 Lacs

Noida

Work from Office

• Spark/PySpark: hands-on technical data processing experience
• Table design knowledge using Hive, similar to RDBMS table design
• Database SQL knowledge for data retrieval and transformation queries such as joins (full, left, right), ranking, and group by
• Good communication skills
• Additional skills: GitHub, Jenkins, and shell scripting would be an added advantage

Mandatory Competencies
• Big Data - PySpark
• Big Data - Spark
• Big Data - Hadoop
• Big Data - Hive
• DevOps/Configuration Mgmt - Jenkins
• Behavioral - Communication and collaboration
• Database - Database Programming - SQL
• DevOps/Configuration Mgmt - GitLab, GitHub, Bitbucket
• DevOps/Configuration Mgmt - Basic Bash/Shell script writing
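To illustrate the "joins, ranking, group by" transformations named above, here is a small Spark SQL example with Hive support enabled; the database name and columns are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

object RankingExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ranking-example")
      .enableHiveSupport() // resolve tables from the Hive metastore
      .getOrCreate()

    // Window function: rank each customer's orders by amount within their region.
    val ranked = spark.sql("""
      SELECT region,
             customer_id,
             amount,
             RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS amount_rank
      FROM sales.orders
    """)

    ranked.show()
  }
}
```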

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Gurugram

Work from Office

Why Join Us
To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Introduction to team
Expedia Product & Technology builds innovative products, services, and tools to deliver high-quality experiences for travellers, partners, and our employees. A singular technology platform powered by data and machine learning provides secure, differentiated, and personalised experiences for the traveler and our partners that drive loyalty and customer satisfaction.

In this role, you will:
• Join a high-performing team and have a unique opportunity to make a highly visible impact
• Learn best practices and how to constantly raise the bar in terms of engineering excellence
• Identify inefficiencies in code or systems operation and offer suggestions for improvements
• Expand your skills in developing high-quality, distributed, and scalable software
• Share new skills and knowledge with the team to increase efficiency
• Write code that is clean, maintainable, and optimized, with good naming conventions
• Develop fast, scalable, and highly available services
• Participate in code reviews and pull requests

Experience and qualifications:
• 5+ years of software development work experience using modern languages, i.e., Java/Kotlin
• Bachelor's in computer science or a related technical field, or equivalent related professional experience
• Experience in Java, Scala, Kotlin, AWS, Kafka, S3, Lambda, Docker, Datadog
• Problem solver with a good understanding of algorithms, data structures, and distributed applications
• Solid understanding of Object-Oriented Programming concepts, data structures, algorithms, and test-driven development
• Solid understanding of load balancing, caching, and database partitioning to improve application scalability
• Demonstrated ability to develop and support large, internet-scale software systems
• Experience with AWS services
• Knowledge of NoSQL databases and cloud computing concepts
• Sound understanding of client-side optimization best practices
• Ability to quickly pick up new technologies and languages with ease
• Working knowledge of Agile software development methodologies
• Strong verbal and written communication skills
• Passionate about quality of work

Accommodation requests
If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named a Best Place to Work on Glassdoor in 2024 and to be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia, Hotels.com, Expedia Partner Solutions, Vrbo, trivago, Orbitz, Travelocity, Hotwire, Wotif, ebookers, CheapTickets, Expedia Group Media Solutions, Expedia Local Expert, CarRentals.com, and Expedia Cruises. 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners.

Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
