2.0 - 3.0 years
0 - 0 Lacs
Vadodara
Remote
Vadodara, Gujarat, India

Job Title: AI/ML Engineer
Location: Vadodara, Gujarat
Job Type: Full-time
Experience Level: 2-3 years
Salary Range: 60-75k
Mode of working: Remote (WFH)

Job Summary:
We are looking for a skilled and motivated AI/ML Engineer to join our team. The ideal candidate will design, develop, and implement machine learning models and AI-driven solutions to solve complex business problems. You will collaborate with cross-functional teams to bring scalable and innovative AI products to production.

Key Responsibilities:
- Design, build, and deploy machine learning models and algorithms.
- Work with large datasets to extract meaningful patterns and insights.
- Collaborate with data engineers to ensure efficient data pipelines.
- Conduct experiments and perform model evaluation and optimization.
- Integrate ML models into production systems.
- Stay updated with the latest research and developments in AI/ML.
- Create documentation and communicate findings and models effectively.

Requirements:

Technical Skills:
- Proficiency in Python and ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
- Experience with data preprocessing, feature engineering, and model evaluation.
- Knowledge of deep learning, NLP, computer vision, or reinforcement learning (as per role focus).
- Familiarity with cloud platforms (AWS, GCP, Azure) and MLOps tools.
- Experience with databases (SQL, NoSQL) and version control (Git).

Education & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (or equivalent).
- 2-3 years of experience in machine learning or AI roles.

Soft Skills:
- Strong problem-solving and analytical skills.
- Ability to work independently and in a team environment.
- Excellent communication and documentation skills.

Preferred Qualifications:
- Publications or contributions to open-source ML projects.
- Experience in deploying models using Docker/Kubernetes.
- Familiarity with ML lifecycle tools like MLflow, Kubeflow, or Airflow.

What We Offer:
- Competitive salary and benefits.
- Opportunity to work on cutting-edge AI technologies.
- Flexible working hours and remote options.
- Supportive and innovative team environment.

Why join us? All our products are enterprise-grade solutions, so you get to work with the latest technologies to compete with the world.
- Flexible working hours
- Work-life balance
- 5-day working week
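The "data preprocessing, feature engineering, and model evaluation" loop this posting asks for can be sketched in miniature. This is an illustrative example only, not from the posting: it hand-rolls standardization and a nearest-centroid classifier in plain Python (a real role would use scikit-learn or similar), and the dataset and function names are invented.

```python
def standardize(rows):
    """Scale each feature to zero mean and unit variance (a common preprocessing step)."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [max((sum((x - m) ** 2 for x in c) / len(c)) ** 0.5, 1e-9)
            for c, m in zip(cols, means)]
    return [[(x - m) / s for x, m, s in zip(r, means, stds)] for r in rows]

def train_centroids(features, labels):
    """'Train' by averaging the feature vectors of each class."""
    grouped = {}
    for f, y in zip(features, labels):
        grouped.setdefault(y, []).append(f)
    return {y: [sum(c) / len(c) for c in zip(*fs)] for y, fs in grouped.items()}

def predict(centroids, row):
    # Assign the class whose centroid is nearest in squared Euclidean distance.
    return min(centroids, key=lambda y: sum((a - b) ** 2 for a, b in zip(row, centroids[y])))

def accuracy(centroids, features, labels):
    hits = sum(predict(centroids, f) == y for f, y in zip(features, labels))
    return hits / len(labels)

# Tiny synthetic dataset: two well-separated clusters.
X = [[1.0, 1.2], [0.9, 1.0], [1.1, 0.8], [5.0, 5.2], [4.8, 5.1], [5.2, 4.9]]
y = [0, 0, 0, 1, 1, 1]
Xs = standardize(X)
model = train_centroids(Xs, y)
print(accuracy(model, Xs, y))  # 1.0 on this trivially separable data
```

The same preprocess, train, evaluate structure carries over directly when the hand-rolled pieces are replaced by library calls.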
Posted 1 month ago
7.0 years
0 Lacs
Chhattisgarh, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward.

We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure based on additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more.

As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists and threat analysts to design, implement, and maintain scalable ML pipelines used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You’ll Do
- Help design, build, and facilitate adoption of a modern Data+ML platform
- Modularize complex ML code into standardized and repeatable components
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
- Review code changes from data scientists and champion software development best practices
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You’ll Need
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
- 3+ years of experience developing and deploying machine learning solutions to production
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with (one of) Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
- Production experience with infrastructure-as-code tools such as Terraform, FluxCD
- Expert-level experience with Python; Java/Scala exposure is recommended
- Ability to write Python interfaces that provide standardized and simplified access to internal CrowdStrike tools for data scientists
- Expert-level experience with CI/CD frameworks such as GitHub Actions
- Expert-level experience with containerization frameworks
- Strong analytical and problem-solving skills, capable of working in a dynamic environment
- Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

VJ1

Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world-class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.

CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions -- including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs -- on valid job requirements.

If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
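The "write Python interfaces to provide standardized and simplified interfaces for data scientists" requirement above describes a facade pattern. A minimal, purely hypothetical sketch follows; every name is invented for illustration, and the in-memory dict stands in for whatever catalog or blob-storage backend such a platform would actually use.

```python
class DatasetClient:
    """A standardized entry point a platform team might hand to data scientists,
    hiding storage paths, formats, and versioning behind two calls."""

    def __init__(self, backend):
        self._backend = backend  # would be blob storage / a data catalog in production

    def load(self, dataset, version="latest"):
        key = (dataset, version)
        if key not in self._backend:
            raise KeyError(f"unknown dataset {dataset!r} (version {version!r})")
        return self._backend[key]

    def save(self, dataset, rows, version):
        # Writing a pinned version also advances the "latest" alias.
        self._backend[(dataset, version)] = list(rows)
        self._backend[(dataset, "latest")] = list(rows)

backend = {}
client = DatasetClient(backend)
client.save("threat_events", [{"host": "a", "score": 0.9}], version="v1")
print(client.load("threat_events"))  # resolves "latest" to the v1 rows
```

The value of the pattern is that data scientists never touch bucket names or file layouts, so the platform team can change the backend without breaking notebooks.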
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-375

At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines.

Databricks Mosaic AI offers a unique data-centric approach to building enterprise-quality machine learning and generative AI solutions, enabling organizations to securely and cost-effectively own and host ML and generative AI models, augmented or trained with their enterprise data. And we're only getting started in Bengaluru, India: we are currently in the process of setting up 10 new teams from scratch!

As a Senior Software Engineer at Databricks India, you can work across:
- Backend
- DDS (Distributed Data Systems)
- Full Stack

The Impact You'll Have
Our Backend teams span many domains across our essential service platforms. For instance, you might work on challenges such as:
- Problems that span from product to infrastructure, including distributed systems, at-scale service architecture and monitoring, workflow orchestration, and developer experience.
- Delivering reliable and high-performance services and client libraries for storing and accessing humongous amounts of data on cloud storage backends, e.g., AWS S3, Azure Blob Store.
- Building reliable, scalable services (e.g., Scala, Kubernetes) and data pipelines (e.g., Apache Spark™, Databricks) to power the pricing infrastructure that serves millions of cluster-hours per day, and developing product features that empower customers to easily view and control platform usage.

Our DDS team spans: Apache Spark™, Data Plane Storage, Delta Lake, Delta Pipelines, and Performance Engineering.

As a Full Stack software engineer, you will work closely with your team and product management to bring delight through a great user experience.

What We Look For
- BS (or higher) in Computer Science or a related field
- 6+ years of production-level experience in one of: Python, Java, Scala, C++, or a similar language
- Experience developing large-scale distributed systems from scratch
- Experience working on a SaaS platform or with service-oriented architectures

About Databricks
Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook.

Benefits
At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks.

Our Commitment to Diversity and Inclusion
At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics.

Compliance
If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
7.0 years
0 Lacs
Andhra Pradesh, India
Remote
CrowdStrike — Data + ML Platform engineer. Role description identical to the CrowdStrike posting above; this listing differs only in location.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Head - Python Engineering

Job Summary:
We are looking for a skilled Python AI/ML developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python's PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
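The caching responsibility this posting lists twice (Redis or Memcached in front of slow queries) follows one pattern: check the cache, fall through to the expensive call on a miss, and store the result with a time-to-live. A runnable in-process sketch of that pattern, with the dict standing in for Redis/Memcached and all names invented for illustration:

```python
import time

class TTLCache:
    """In-process stand-in for a Redis/Memcached-style cache with expiry."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.monotonic() >= expiry:  # lazy eviction on read
            del self._store[key]
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

calls = {"count": 0}
cache = TTLCache(ttl_seconds=60)

def fetch_user_plan(user_id):
    cached = cache.get(user_id)
    if cached is not None:
        return cached
    calls["count"] += 1  # stands in for a slow database/API round trip
    result = {"user": user_id, "plan": "pro"}
    cache.set(user_id, result)
    return result

fetch_user_plan(42)
fetch_user_plan(42)
print(calls["count"])  # 1 — the second call was served from the cache
```

With a real Redis backend the `get`/`set` pair maps onto `GET` and `SET` with an expiry option; the calling code is unchanged, which is why the pattern is usually wrapped exactly like this.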
Posted 1 month ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The Data Engineering team within the AI, Data, and Analytics (AIDA) organization is the backbone of our data-driven sales and marketing operations. We provide the essential foundation for transformative insights and data innovation. By focusing on integration, curation, quality, and data expertise across diverse sources, we power world-class solutions that advance Pfizer’s mission. Join us in shaping a data-driven organization that makes a meaningful global impact. Role Summary We are seeking a technically adept and experienced Data Solutions Engineering Manager with a passion for developing data products and innovative solutions to create competitive advantages for Pfizer’s commercial business units. This role requires a strong technical background to ensure effective collaboration with engineering and developer team members. As a Data Solutions Engineer in our data lake/data warehousing team, you will play a crucial role in building data pipelines and processes that support data transformation, workload management, data structures, dependencies, and metadata management. This role will need to be able to work closely with stakeholders to understand their needs and work alongside them to ensure data being ingested meets the business user's needs and will well modeled and organized to promote scalable usage and good data hygiene. Work with complex and advanced data environments, employ the right architecture to handle data, and support various analytics use cases including business reporting, production data pipeline, machine learning, optimization models, statistical models, and simulations. The Data Solutions Engineering Manager will ensure data quality and integrity by validating and cleansing data, identifying, and resolving anomalies, implementing data quality checks, and conducting system integration testing (SIT) and user acceptance testing (UAT). 
The ideal candidate is a passionate and results-oriented product lead with a proven track record of delivering data-driven solutions for the pharmaceutical industry. Role Responsibilities Project solutioning, including scoping, and estimation. Data sourcing, investigation, and profiling. Prototyping and design thinking. Developing data pipelines & complex data workflows. Actively contribute to project documentation and playbook, including but not limited to physical models, conceptual models, data dictionaries and data cataloging. Accountable for engineering development of both internal and external facing data solutions by conforming to EDSE and Digital technology standards. Partner with internal / external partners to design, build and deliver best in class data products globally to improve the quality of our customer analytics and insights and the growth of commercial in its role in helping patients. Demonstrate outstanding collaboration and operational excellence. Drive best practices and world-class product capabilities. Qualifications Bachelor’s degree in a technical area such as computer science, engineering or management information science. 5+ years of combined data warehouse/data lake experience as hands on data engineer. 5+ years in developing data product and data features in servicing analytics and AI use cases Recent Healthcare Life Sciences (pharma preferred) and/or commercial/marketing data experience is highly preferred. Domain knowledge in the pharmaceutical industry preferred. Good knowledge of data governance and data cataloging best practices. Technical Skillset 5+ years of hands-on experience in working with SQL, Python, object-oriented scripting languages (e.g. Java, C++, etc..) in building data pipelines and processes. Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views. 5+ years of hands-on experience delivering data lake/data warehousing projects. 
Experience working with cloud-native SQL and NoSQL database platforms; Snowflake experience is desirable. Experience with AWS services (EC2, EMR, RDS) and Spark is preferred. Solid understanding of Scrum/Agile is preferred, along with working knowledge of CI/CD, GitHub, and MLflow. Familiarity with data privacy standards, governance principles, data protection, and pharma industry practices/GDPR compliance is preferred. Great communication skills. Great business influencing and stakeholder management skills. Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
Posted 1 month ago
5.0 years
0 Lacs
Greater Hyderabad Area
On-site
Job Title: Data Engineer (Snowflake + dbt) Location: Hyderabad, India Job Type: Full-time Job Description We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt and be able to effectively work in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client’s organization. Key Responsibilities Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files. Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms. Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets. Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design. Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows. Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code. Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets. Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery. Required Qualifications Data Engineering Skills 3–5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt. Capable of building and maintaining ELT pipelines using dbt and Snowflake with guidance on architecture and best practices. 
Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/Dimensional). Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems. Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations). Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control-M. Experience using Git for version control and exposure to CI/CD workflows in team environments. Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake. Working knowledge of Python for basic automation and data manipulation tasks. Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR. Data Quality & Documentation Familiar with dbt testing and documentation practices (e.g., dbt tests, dbt docs). Awareness of standard data validation and monitoring techniques for reliable pipeline development. Soft Skills & Collaboration Strong problem-solving skills and ability to debug SQL and transformation logic effectively. Able to document work clearly and communicate technical solutions to a cross-functional team. Experience working in Agile settings, participating in sprints, and handling shifting priorities. Comfortable collaborating with analysts, data scientists, and architects across onshore/offshore teams. High attention to detail, proactive attitude, and adaptability in dynamic project environments. Nice to Have Experience working in client-facing or consulting roles. Exposure to AI/ML data pipelines or tools like feature stores and MLflow. Familiarity with enterprise-grade data quality tools. Education: Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. 
Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus. Why Join Us? Opportunity to work on diverse and challenging projects in a consulting environment. Collaborative work culture that values innovation and curiosity. Access to cutting-edge technologies and a focus on professional development. Competitive compensation and benefits package. Be part of a dynamic team delivering impactful data solutions.
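The layered staging/mart architecture this listing describes can be sketched with plain SQL. The example below is a minimal, stdlib-only illustration using sqlite3 with hypothetical table names; it is not the dbt/Snowflake setup itself, just the layering idea:

```python
import sqlite3

# In-memory database standing in for a warehouse; all names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Raw layer: data exactly as ingested, untyped and unvalidated.
    CREATE TABLE raw_orders (id TEXT, amount TEXT, country TEXT);
    INSERT INTO raw_orders VALUES
        ('1', '10.50', 'in'), ('2', '20.00', 'IN'),
        ('3', '5.00', 'us'), ('4', NULL, 'us');

    -- Staging layer: one view per source, casting types and standardizing values.
    CREATE VIEW stg_orders AS
    SELECT CAST(id AS INTEGER) AS order_id,
           CAST(amount AS REAL) AS amount,
           UPPER(country)       AS country
    FROM raw_orders
    WHERE amount IS NOT NULL;

    -- Mart layer: business-facing aggregate built only on the staging view.
    CREATE VIEW mart_revenue_by_country AS
    SELECT country, SUM(amount) AS revenue
    FROM stg_orders
    GROUP BY country;
""")

rows = conn.execute(
    "SELECT country, revenue FROM mart_revenue_by_country ORDER BY country"
).fetchall()
print(rows)  # [('IN', 30.5), ('US', 5.0)]
```

Each layer reads only from the layer beneath it, which is the same dependency discipline dbt models enforce.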
Posted 1 month ago
0.0 years
0 Lacs
Panaji, Goa
On-site
Education: Bachelor’s or Master’s in Computer Science, Software Engineering, or a related field (or equivalent practical experience). Hands-On ML/AI Experience: Proven record of deploying, fine-tuning, or integrating large-scale NLP models or other advanced ML solutions. Programming & Frameworks: Strong proficiency in Python (PyTorch or TensorFlow) and familiarity with MLOps tools (e.g., Airflow, MLflow, Docker). Security & Compliance: Understanding of data privacy frameworks, encryption, and secure data handling practices, especially for sensitive internal documents. DevOps Knowledge: Comfortable setting up continuous integration/continuous delivery (CI/CD) pipelines, container orchestration (Kubernetes), and version control (Git). Collaborative Mindset: Experience working cross-functionally with technical and non-technical teams; ability to clearly communicate complex AI concepts. Role Overview Collaborate with cross-functional teams to build AI-driven applications for improved productivity and reporting. Lead integrations with hosted AI solutions (ChatGPT, Claude, Grok) for immediate functionality without transmitting sensitive data, while laying the groundwork for a robust in-house AI infrastructure. Develop and maintain on-premises large language model (LLM) solutions (e.g., Llama) to ensure data privacy and protect intellectual property. Key Responsibilities LLM Pipeline Ownership: Set up, fine-tune, and deploy on-prem LLMs; manage data ingestion, cleaning, and maintenance for domain-specific knowledge bases. Data Governance & Security: Assist our IT department in implementing role-based access controls, encryption protocols, and best practices to protect sensitive engineering data. Infrastructure & Tooling: Oversee hardware/server configurations (or cloud alternatives) for AI workloads; evaluate resource usage and optimize model performance. 
Software Development: Build and maintain internal AI-driven applications and services (e.g., automated report generation, advanced analytics, RAG interfaces, as well as custom desktop applications). Integration & Automation: Collaborate with project managers and domain experts to automate routine deliverables (reports, proposals, calculations) and speed up existing workflows. Best Practices & Documentation: Define coding standards, maintain technical documentation, and champion CI/CD and DevOps practices for AI software. Team Support & Training: Provide guidance to data analysts and junior developers on AI tool usage, ensuring alignment with internal policies and limiting model “hallucinations.” Performance Monitoring: Track AI system metrics (speed, accuracy, utilization) and implement updates or retraining as necessary. Job Types: Full-time, Permanent Pay: ₹30,000.00 - ₹40,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Yearly bonus Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 10/06/2025
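The RAG interfaces mentioned above rest on a retrieval step. Below is a deliberately naive, stdlib-only sketch that ranks documents by token overlap with the query; real deployments would use embeddings and a vector store, and the documents here are invented:

```python
def tokenize(text: str) -> set:
    """Lowercase, whitespace-split bag of tokens."""
    return set(text.lower().split())

def retrieve(query: str, documents: list) -> str:
    """Return the document with the highest Jaccard overlap with the query."""
    q = tokenize(query)
    scored = [
        (len(q & tokenize(doc)) / len(q | tokenize(doc)), doc)
        for doc in documents
    ]
    return max(scored)[1]  # pick the best-scoring document

# Hypothetical internal documents; a real knowledge base would be much larger.
docs = [
    "pump maintenance report covering vibration thresholds",
    "quarterly sales figures for the northern region",
]
best = retrieve("pump vibration maintenance", docs)
print(best)
```

In a full RAG pipeline the retrieved text would then be injected into the LLM prompt as context, which is what keeps generated answers grounded in internal documents.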
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact. Role Overview: We are looking for a Data Scientist to join us and build intelligent, data-driven solutions for our client that enable impactful decisions. This role requires contributions across the data science lifecycle, from data wrangling and exploratory analysis to building and deploying machine learning models. Whether you're just getting started or have years of experience, we're looking for individuals who are curious, analytical, and driven to make a difference with data. Responsibilities: Design, develop, and deploy machine learning models and analytical solutions. Conduct exploratory data analysis and feature engineering. Own or contribute to the end-to-end data science pipeline: data cleaning, modeling, validation, and deployment. Collaborate with cross-functional teams (engineering, product, business) to define problems and deliver measurable impact. Translate business challenges into data science problems and communicate findings clearly. Implement A/B tests, statistical tests, and experimentation strategies. Support model monitoring, versioning, and continuous improvement in production environments. Evaluate new tools, frameworks, and best practices to improve model accuracy and scalability. Required Skills: Strong programming skills in Python, including libraries such as pandas, NumPy, scikit-learn, matplotlib, and seaborn. Proficient in SQL; comfortable querying large, complex datasets. Sound understanding of statistics, machine learning algorithms, and data modeling. Experience building end-to-end ML pipelines. Exposure to or hands-on experience with model deployment tools like FastAPI, Flask, and MLflow. Experience with data visualization and insight communication. Familiarity with version control tools (e.g., Git) and collaborative workflows. 
Ability to write clean, modular code and document processes clearly. Nice to Have: Experience with deep learning frameworks like TensorFlow or PyTorch. Familiarity with data engineering tools like Apache Spark, Kafka, Airflow, and dbt. Exposure to MLOps practices and managing models in production environments. Working knowledge of cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, BigQuery, Vertex AI). Experience designing and interpreting A/B tests or causal inference models. Prior experience in high-growth startups or cross-functional leadership roles. Educational Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Engineering, or a related field. Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
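The A/B testing called out in this listing boils down to comparing conversion rates between two groups. Here is a minimal two-proportion z-test using only the standard library; the counts are made up for illustration:

```python
import math
from statistics import NormalDist

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Variant B converts at 6.25% vs. the control's 5.0%, 2,400 users per arm.
z, p = two_proportion_ztest(conv_a=120, n_a=2400, conv_b=150, n_b=2400)
print(f"z={z:.2f}, p={p:.3f}")
```

With these numbers the lift is not quite significant at the conventional 0.05 level, which is exactly the kind of judgment call experimentation work involves.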
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
Mandatory Skills ✅ Python – Minimum 4+ years of hands-on experience ✅ AI/ML – Minimum 5+ years of strong experience in designing and implementing machine learning models, algorithms, and AI-driven solutions ✅ SQL – Minimum 2+ years of experience working with large datasets and query optimization Key Responsibilities Lead the development of advanced AI/ML models for real-world applications Collaborate with data scientists, analysts, and software engineers to deploy end-to-end data-driven solutions Design scalable machine learning pipelines and automate model deployment Work on feature engineering, model tuning, and performance optimization Ensure best practices in AI/ML model governance, performance monitoring, and retraining Preferred Qualifications Experience with MLOps tools (e.g., MLflow, Kubeflow) Strong knowledge of data preprocessing, feature extraction, and model interpretability Exposure to cloud platforms (AWS, Azure, or GCP) Familiarity with deep learning frameworks (TensorFlow, PyTorch, etc.) is a plus 💡 Work Mode: Flexible – choose to work from our Cochin or Trivandrum office, or fully remote! 📅 Start ASAP! We’re only considering candidates with a notice period of 0–30 days. Skills: data science, deep learning frameworks, AI, Python, feature extraction, AI/ML, SQL, cloud platforms, model interpretability, data preprocessing, ML, MLOps tools
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements "At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The DSOM product line includes BMC’s industry-leading Digital Services and Operation Management products. We have many interesting SaaS products, in the fields of: Predictive IT service management, Automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in Innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees’ innovative ideas. So, if Innovation is close to your heart – this is the place for you! BMC is looking for an experienced Data Science Engineer with hands-on experience in classical ML, deep learning networks, and large language models to join us and design, develop, and implement microservice-based edge applications using the latest technologies. In this role, you will be responsible for end-to-end design and execution of BMC Data Science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advance analysis, strategies, and policies. 
Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Ideate, design, implement and maintain enterprise business software platform for edge and cloud, with a focus on Machine Learning and Generative AI capabilities, using mainly Python Work with a globally distributed development team to perform requirements analysis, write design documents, design, develop and test software development projects. Understand real world deployment and usage scenarios from customers and product managers and translate them to AI/ML features that drive value of the product. Work closely with product managers and architects to understand requirements, present options, and design solutions. Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption. Analyze and clearly communicate both verbally and in written form the status of projects or issues along with risks and options to the stakeholders. To ensure you’re set up for success, you will bring the following skillset & experience: You have 8+ years of hands-on experience in data science or machine learning roles. You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains. You have strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement. You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments. You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). Whilst these are nice to have, our team can help you develop in the following skills: Experience with digital twins, real-time analytics, or streaming data systems. Contribution to open-source ML/AI/IoT projects or relevant publications. 
Experience with Agile development methodology and best practices in unit testing. Experience with Kubernetes (kubectl, helm) will be an advantage. Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.). Our commitment to you! BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If after reading the above, you’re unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas! BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page. BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 8,047,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. 
We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. (Returnship@BMC) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to learn more and how to apply.
Posted 1 month ago
7.0 years
4 - 6 Lacs
Bengaluru
On-site
FEQ326R387 Databricks is looking for a motivated Sr. Strategy and Operations Manager to join our Field Engineering team that helps define Go-To-Market (GTM) strategy, provides strategic analyses, and instills operational thoughtfulness in our successful and fast-growing GTM business. You will use data and qualitative information to help our Sales and Field Engineering leadership manage the business. You will support essential aspects of our GTM design and annual planning process. You will help build our data and analytics foundation, including executive reporting, health-of-business reviews, dashboards, and indicators. You will work with our Field Engineering, Sales, Finance, Data, Marketing, Order Ops, and other GTM teams. You will report to the Sr. Director, Strategy and Ops. The impact you will have: Establish the GTM strategy and put processes in place to ensure we have the right investments at the right time Be a trusted partner to the GTM leadership by defining, tracking, and implementing goals, programs, and strategies that scale Guide the annual GTM planning process (FY and long-range modeling, investment return-on-investment analysis, HC planning, capacity setting) Design and manage the headcount forecasting process (FY, long-range, and quarterly modeling) Lead executive analyses, strategies, and deliverables (e.g., board materials, QBR) Provide visibility and performance tracking to the business (operational councils, ops reviews, rep efficiency and productivity indicators, dashboards) Help build our data and reporting foundation Improve operational efficiency by automating and improving processes and dashboards that scale What we look for: 7+ years of strategy & operations experience (investment banking, strategy consulting, FP&A, business operations, Enterprise/Mid-Market SaaS experience) Ability and passion to analyze, set priorities, and solve complex problems Propensity to summarize complex concepts and data and present clear information to executives, teams, and internal customers Data expert: querying and scoping (SQL, Databricks, BigQuery), analysis (Excel), summarizing (pivot tables, charts, slides, written explanation), reporting (dashboards, repositories) Process expert: envision end-to-end process change to solve our needs, drive agreement, document requirements, guide execution in partnership with IT, perform UAT, and report on progress Proficient in BI and sales tools (e.g., Salesforce, Tableau) Passion for building and working in a high-performance global team A fondness for customer service and patience About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
1.0 years
4 - 5 Lacs
India
On-site
Job Description We are seeking an AI/ML Engineer - I to design, develop, and deploy machine learning models and AI solutions. This role involves working with modern ML frameworks, large language models (LLMs), and building scalable AI applications. Key Responsibilities Develop and deploy machine learning models using TensorFlow, PyTorch, and Scikit-Learn Build end-to-end ML pipelines from data preprocessing to model serving Work with LLMs, implement RAG systems, and develop AI-driven applications Create REST APIs using FastAPI or Flask and deploy using Docker and Kubernetes Process and analyze large datasets using SQL, Pandas, and Spark Implement MLOps practices such as model monitoring, logging, and versioning Required Skills Programming & Development Python – Strong proficiency, including OOP principles Software Architecture – Familiarity with frontend/backend components, microservices, and API design Version Control – Proficient with Git and platforms like GitHub or GitLab ML/AI Technologies ML Frameworks – TensorFlow, PyTorch, Scikit-Learn, XGBoost LLM Expertise – Understanding of transformer architectures, fine-tuning, and prompt engineering Vector Databases – Experience with Pinecone, Weaviate, or Chroma for embeddings and similarity search Data & Infrastructure Data Handling – Pandas, NumPy, SQL, Apache Spark Cloud Platforms – Experience with AWS, GCP, or Azure Deployment – Docker, Kubernetes, FastAPI, Flask, REST APIs Mathematical Foundation Solid understanding of linear algebra, statistics, and calculus as applied to ML algorithms Soft Skills Strong problem-solving abilities, clear technical communication, and collaboration across teams Preferred Qualifications Bachelor's or Master’s degree in Computer Science, Data Science, or a related field Approximately 1 year of experience in ML/AI development Familiarity with MLOps tools like MLflow or Weights & Biases Knowledge of computer vision and/or NLP applications Additional Technical Areas Understanding of 
data privacy and security best practices Experience with streaming data processing tools such as Kafka or Apache Beam Educational Qualifications: Bachelor’s degree (B.Tech/B.E.) Job Types: Full-time, Permanent Pay: ₹477,103.79 - ₹507,202.10 per year Benefits: Provident Fund Schedule: Monday to Friday Supplemental Pay: Yearly bonus Experience: AI Development: 1 year (Preferred) ML Engineer: 1 year (Preferred) Work Location: In person
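The vector databases this listing names (Pinecone, Weaviate, Chroma) ultimately perform similarity search over embeddings. This stdlib-only sketch shows the core ranking step, with tiny made-up vectors standing in for real model embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def top_k(query, index, k=1):
    """Return the k document ids whose embeddings are most similar to the query."""
    ranked = sorted(index, key=lambda doc_id: cosine(query, index[doc_id]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 3-dimensional "embeddings"; real ones have hundreds of dimensions,
# and a vector database would use an approximate index rather than a full sort.
index = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 0.8, 0.6],
    "doc_c": [0.1, 0.1, 0.9],
}
print(top_k([1.0, 0.0, 0.1], index))
```

Cosine similarity compares direction rather than magnitude, which is why it is the default metric for text embeddings.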
Posted 1 month ago
2.0 years
22 Lacs
Pune
On-site
As an AI Engineer, you will use modern data architecture, algorithms, and processes to help our customers meet key business objectives. In this role, you may create, review, and optimise algorithms and processes for our customers in the 0-1 and 1-n journey. Responsibilities: Architect: Choose the right tools, frameworks, and cloud services to meet business goals. Educate: Advise and educate customers on how to use different data engineering, AI/ML algorithms, strategies, and processes from the many options available. Build: Build a data pipeline that processes, stores, integrates, and analyzes large volumes of data in record time. Create visualisations and insights from the data in order to take informed and data-backed business decisions. Build or leverage AI/ML algorithms to solve real business problems. Communicate: Proactively communicate with your team. Raise blockers, brainstorm solutions, and seek early feedback. Review: Participate in peer reviews to ensure quality deliverables. Tests, Monitoring, and Observability: Write test suites and build automated CI and CD pipelines to deliver more releases and reduce manual effort. Create automated mechanisms to evaluate models with changing variable conditions. Bake in observability and monitoring to ensure outputs are in line with expectations. Learn: Learn from the practices followed by other teams and evangelize your learnings. Showcase: Share your learnings on internal and customer projects via articles, case studies, books, and webinars. Requirements: Full Stack Development is a must. Experience with relevant frontend, backend frameworks like React, Next, Node/Golang/Python is a must. Experience with cloud providers like AWS/GCP is a must. Proficiency in designing, deploying, fine-tuning, and evaluating LLMs (including RAG and vector DB integrations) with built-in observability and monitoring. Hands-on experience with AI/MLOps workflows and tools (e.g., MLflow, Kubeflow). 
Background in multi-agent orchestration and multimodal AI systems. Experience creating AI workflows using orchestration platforms (e.g., n8n, relay.app) and leveraging a variety of AI tools and plugins (e.g., Byword, Exa, Clay). Strong Python expertise with two or more libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch, Keras, NLTK/SpaCy, Hugging Face). Proven track record of optimizing models and measuring their business impact (performance metrics and ROI). Resilience and adaptability in ambiguous, fast-moving environments. Ability to mentor and coach colleagues when needed. Job Types: Full-time, Permanent Pay: Up to ₹2,200,000.00 per year Schedule: Monday to Friday Experience: Total Work: 2 years (Required) Full-stack development: 1 year (Preferred) Work Location: In person
Posted 1 month ago
4.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: AI/ML Engineer Experience Required: 3–4 Years Location: Indore (Onsite) Department: Artificial Intelligence / Machine Learning Reports To: CTO / Head of AI About the role: We are seeking a highly skilled and innovative AI/ML Engineer with 3–4 years of hands-on experience in building and deploying AI-powered solutions. The ideal candidate should have a strong foundation in industrial automation, computer vision, and LLM-based applications, and be proficient in modern AI tools such as LangChain, LangGraph, Vision Transformers, and AWS SageMaker MLOps. Experience in multi-agent chat systems and RAG architectures is a plus. Key Responsibilities: Design, train, and deploy ML models for industrial automation using OpenCV and deep learning. Develop multi-agent chat applications with LLMs, React-based agents, and contextual memory. Implement Vision Transformers (ViTs) for advanced computer vision tasks. Build intelligent conversational systems using LangChain, LangGraph, RAG, and vector databases. Fine-tune pre-trained LLMs for specific enterprise applications. Collaborate with frontend teams to integrate React-based UIs with AI backends. Deploy and manage AI solutions on AWS (SageMaker, Lambda, S3, EC2). Maintain performance, scalability, and reliability in production-grade AI systems. Required Skills: 3–4 years of AI/ML engineering experience with a focus on real-world applications. Strong command of Python and libraries like PyTorch, TensorFlow, and Scikit-learn. In-depth knowledge of LLMs (e.g., GPT, Claude, LLaMA), prompt engineering, and fine-tuning. Proficiency in LangChain, LangGraph, and RAG-based architectures. Experience with Vision Transformers, YOLO, Detectron2, and related CV tools. Ability to build and connect intelligent UIs using React and backend AI systems. Hands-on experience with AWS services (SageMaker, Lambda, EC2, S3). Familiarity with CI/CD workflows for ML models and production deployments. 
Preferred Qualifications: Exposure to edge AI, NVIDIA Jetson, or industrial IoT integrations. Experience building AI-powered chatbots with memory and tool integrations. Working knowledge of Docker, MLflow, or DVC for model versioning and containerization. Contributions to open-source AI/ML projects or research publications. Send your applications to vishal.bhat@moreyeahs.com or Contact: +91-9644334475
Posted 1 month ago
4.0 - 8.0 years
6 - 10 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are looking for India's top 1% Machine Learning Engineers for a unique job opportunity to work with industry leaders. Who can be a part of the community? We are looking for top-tier Machine Learning Engineers with expertise in building, deploying, and optimizing AI models. If you have experience in this field, then this is your chance to collaborate with industry leaders. What's in it for you? Pay above market standards. The role is going to be contract-based, with project timelines from 2-6 months, or freelancing. Be a part of an elite community of professionals who can solve complex AI challenges. Responsibilities: Design, optimize, and deploy machine learning models; implement feature engineering and scaling pipelines. Use deep learning frameworks (TensorFlow, PyTorch) and manage models in production (Docker, Kubernetes). Automate workflows; ensure model versioning, logging, and real-time monitoring; comply with security and regulations. Work with large-scale data, develop feature stores, and implement CI/CD pipelines for model retraining and performance tracking. Required Skills: Proficiency in machine learning, deep learning, and data engineering (Spark, Kafka). Expertise in MLOps, automation tools (Docker, Kubernetes, Kubeflow, MLflow, TFX), and cloud platforms (AWS, GCP, Azure). Strong knowledge of model deployment, monitoring, security, compliance, and responsible AI practices. Nice to Have: Experience with A/B testing, Bayesian optimization, and hyperparameter tuning. Familiarity with multi-cloud ML deployments and generative AI technologies (LLM fine-tuning, FAISS). Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
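The feature engineering and scaling pipelines in the responsibilities above follow a fit-on-train, apply-everywhere discipline. This is a stdlib-only sketch of standardization, mirroring in simplified form what scikit-learn's StandardScaler does:

```python
class Standardizer:
    """Fit mean/std on training data, then standardize any data with them."""

    def fit(self, values):
        # Population mean and standard deviation of the training values.
        self.mu = sum(values) / len(values)
        variance = sum((v - self.mu) ** 2 for v in values) / len(values)
        self.sigma = variance ** 0.5
        return self

    def transform(self, values):
        # Apply the *training* parameters, even to unseen serving data.
        return [(v - self.mu) / self.sigma for v in values]

train = [10.0, 20.0, 30.0]
scaler = Standardizer().fit(train)
scaled = scaler.transform(train)
print([round(x, 3) for x in scaled])
```

Fitting on the training split and reusing the same parameters at serving time is what avoids the train/serve skew that model monitoring is meant to catch.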
Posted 1 month ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
CSQ226R150 As a Manager for the Global Technical Services team, you will lead a high-performing team of Solution Engineers dedicated to supporting our customers across the Asia Pacific region. You will partner closely with the Asia Pacific Sales and Field Engineering organization to deliver outstanding technical services that drive customer satisfaction, support business growth, and ensure operational efficiency. You will report to the Area Vice President, Global Technical Services. The Impact You Will Have With Us Manage hiring and building the Global Technical Services team in Asia Pacific, thus scaling the organization and driving efficiency. Build a motivated, collaborative culture within a hyper-growth team to embody and promote Databricks' customer-obsessed, teamwork-oriented, and diverse culture. Manage the distribution and allocation of work across the team. Build partnerships with cross-functional stakeholders across Field Engineering and Sales to position services that drive customer growth and product adoption. Conduct operational reviews of the team's deliverables to ensure alignment with strategic priorities and outcomes. What We Look For 12+ years of relevant industry experience. A minimum of 5-6 years of client-facing experience in the Cloud and/or Data/AI space. 3+ years of experience leading technical teams in a technology or consulting organization. Experience scaling and mentoring field or technical teams from scratch at hyper-growth speed. Knowledgeable in and passionate about data-driven decisions, AI, and Cloud software models. Great at instituting processes for technical field members to drive efficiency. Technical background in either Data & AI or Cloud technologies. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. 
Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
4.0 - 8.0 years
5 - 8 Lacs
Hyderabad, Bengaluru
Work from Office
Why Join? Above market-standard compensation. Contract-based or freelance opportunities (2-12 months). Work with industry leaders solving real AI challenges. Flexible work locations: Remote | Onsite | Hyderabad/Bangalore. Your Role: Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines. Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD). Automate ML workflows (feature engineering, retraining, deployment). Scale ML models with Docker, Kubernetes, Airflow. Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure). Must-Have Skills: Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines. Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML). Expertise in monitoring tools (MLflow, Prometheus, Grafana). Knowledge of distributed data processing (Spark, Kafka). (Bonus: Experience in A/B testing, canary deployments, serverless ML)
Posted 1 month ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
P-995 At Databricks, we are passionate about enabling data teams to solve the world's toughest problems — from making the next mode of transportation a reality to accelerating the development of medical breakthroughs. We do this by building and running the world's best data and AI infrastructure platform so our customers can use deep data insights to improve their business. Founded by engineers — and customer obsessed — we leap at every opportunity to solve technical challenges, from designing next-gen UI/UX for interfacing with data to scaling our services and infrastructure across millions of virtual machines. And we're only getting started. As one of the first Engineering Managers in the Software Engineering team at Databricks India , you will work with your team to build infrastructure and products for the Databricks platform at scale . We have multiple teams working on different domains. Resource management infrastructure powering the big data and machine learning workloads on the Databricks platform in a scalable, secure, and cloud-agnostic way Develop reliable, scalable services and client libraries that work with massive amounts of data on the cloud, across geographic regions and Cloud providers Build tools to allow Databricks engineers to operate their services across different clouds and environments Build services, products and infrastructure at the intersection of machine learning and distributed systems. The Impact You Will Have Hire great engineers to build an outstanding team. Support engineers in their career development by providing clear feedback and develop engineering leaders. Ensure high technical standards by instituting processes (architecture reviews, testing) and culture (engineering excellence). Work with engineering and product leadership to build a long-term roadmap. Coordinate execution and collaborate across teams to unblock cross-cutting projects. 
What We Look For 12+ years of extensive experience with large-scale distributed systems, along with the processes around testing, monitoring, SLAs, etc. Extensive experience as a Software Engineering Leader, building and scaling software engineering teams from the ground up. Extensive experience managing a team of strong software engineers. Ability to partner with PM, Sales, and Customers to develop innovative features and products. BS (or higher) in Computer Science or a related field. About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. 
Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago
8.0 - 18.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Greetings from TCS!! TCS is Hiring for Data Architect Interview Mode: Virtual Required Experience: 8-18 years Work location: PAN INDIA Data Architect Technical Architect with experience in designing data platforms; experience in one of the major platforms such as Snowflake, Databricks, Azure ML, AWS data platforms, etc. Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, data mesh Good to have - Spark SQL, Spark Streaming, Kafka Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena Good to have - AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch If interested, kindly send your updated CV and the below-mentioned details via e-mail: srishti.g2@tcs.com Name: E-mail ID: Contact Number: Highest qualification: Preferred Location: Highest qualification university: Current organization: Total years of experience: Relevant years of experience: Any gap: Mention No. of months/years (career/education): If any, then reason for gap: Is it rebegin: Previous organization name: Current CTC: Expected CTC: Notice Period: Have you worked with TCS before (Permanent / Contract):
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
AI Engineer Location: Pune (Hybrid) Experience: 5-8 years (strong experience in Agentic AI required) Position Summary: As a member of the AI team, you will be driving Agentic AI and Generative AI integration across all business units. You will drive AI development and integration across the organization, directly impacting the company's global sustainability efforts and shaping how we leverage AI to serve Fortune 500 clients. Responsibilities and Duties: Strategic Leadership (10%): Champion the AI/ML roadmap, driving strategic planning and execution for all initiatives. Provide guidance on data science projects (Agentic AI, Generative AI, and Machine Learning), aligning them with business objectives and best practices. Foster a data-driven culture, advocating for AI-powered solutions to business challenges and efficiency improvements. Collaborate with product management, engineering, and business stakeholders to identify opportunities and deliver impactful solutions. Technical Leadership (40%): Architect and develop Proof-of-Concept (POC) solutions for Agentic AI, Generative AI, and ML. Utilize Python and relevant data science libraries, leveraging MLflow. Provide technical guidance on AI projects, ensuring alignment with business objectives and best practices. Assist in the development and documentation of standards for ethical and regulatory-compliant AI usage. Stay current with AI advancements, contributing to the team's knowledge and expertise. Perform hands-on data wrangling and AI model development. Operational Leadership (50%): Drive continuous improvement through Agentic AI, Generative AI, and predictive modeling. Participate in Agile development processes (Scrum and Kanban). Ensure compliance with regulatory and ethical AI standards. Required Abilities and Skills: Strong hands-on experience with Agentic AI development and deployment, and with working with large datasets on AWS and Databricks. Desired Experience: Statistical modeling, machine learning algorithms, and data mining techniques. Databricks and MLflow for model training, deployment, and management on AWS. Experience integrating AI with IoT/event data. Experience with real-time and batch inference integration with SaaS applications. International team management experience. Track record of successful product launches in regulated environments. Education and Experience: 5+ years of data science/AI experience. Bachelor's degree in Statistics, Data Science, Computer Engineering, Mathematics, or a related field (Master's preferred). Proven track record of deploying successful Agentic AI, Generative AI, and ML projects from concept to production. Excellent communication skills, able to explain complex technical concepts to both technical and non-technical audiences. Thanks & Regards Bhawesh Kaushik E: Bhawesh.Kaushik@akkodisgroup.com C: 922 040 6619 LinkedIn: www.linkedin.com/in/bhawesh-b36532161/
Posted 1 month ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Job Title: Senior Technical Manager – Full Stack (Python / React / Node / Cloud) Location: Jaipur Type: Full-time, Permanent Experience: 5+ years in software engineering (2+ years leading multi-stack teams) Urgency: Immediate joiners preferred About the Role: We're looking for a Senior Technical Manager to lead high-impact full-stack engineering projects across cloud-native and microservices architectures. You'll manage cross-functional teams, guide system design, and drive delivery in a fast-paced environment. Key Responsibilities: Lead and mentor multi-stack engineering teams Architect, design, and deliver scalable web applications Oversee DevOps, CI/CD, and cloud infrastructure (AWS/GCP/Azure) Ensure high code quality through Agile practices and engineering excellence Drive incident management and SLI/SLO ownership Collaborate with cross-functional teams for seamless project delivery Must-have Skills: Strong experience with Python 3, Django & DRF Proficiency in React.js (v19+) and Node.js microservices Solid understanding of cloud platforms: AWS, Azure, or GCP Experience with Docker, Kubernetes, and Infrastructure as Code (Terraform) CI/CD using GitHub Actions / GitLab CI Agile / Scrum leadership and team management Good-to-have Skills: LLM / RAG integration, NLP, ML, CV MLOps (Kubeflow / MLflow) DevSecOps practices (SCA, SAST, DAST) Why Join Us? Work on cutting-edge technologies and emerging tech stacks Be part of a high-growth, innovation-driven environment Competitive compensation with performance-based incentives Send your application to alaqsha.qadeer@createbytes.com
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Title: Senior Databricks Architect Location: Hybrid (Chennai, TN) Key Responsibilities: The role requires a strong focus on engaging with clients and showcasing thought leadership. The ideal candidate will be skilled in developing insightful content and presenting it effectively to diverse audiences. Cloud Architecture & Design: • Understand customers’ overall data platform, business and IT priorities, and success measures to design data solutions that drive business value. • Design scalable, high-performance, end-to-end data architectures, solutions, and pipelines using Databricks Lakehouse architecture and related technologies that include data ingestion, processing, storage, and analytical capabilities. • Work with major cloud providers to integrate Databricks solutions seamlessly into customers’ enterprise environments. • Assess and validate non-functional attributes and build solutions that exhibit high levels of performance, security, scalability, maintainability, and reliability. • Optimize Databricks clusters and queries for efficiency, reliability, and cost-effectiveness. Technical Leadership: • Guide technical teams in best practices on cloud adoption, migration, and application modernization, and provide thought leadership and insights into existing and emerging Databricks capabilities. • Ensure long-term technical viability and optimization of cloud deployments by identifying and resolving bottlenecks proactively, thereby ensuring high availability and efficient resource utilization. Stakeholder Collaboration: • Work closely with business leaders, developers, and operations teams to ensure alignment of technological solutions with business goals. • Work with prospective and existing customers to implement POCs/MVPs and guide them through deployment, operationalization, and troubleshooting. • Identify, communicate, and mitigate the assumptions, issues, and risks that occur throughout the project lifecycle. 
• Ability to judge and strike a balance between what is strategically logical and what can be accomplished realistically. Innovation and Continuous Improvement: • Stay updated with the latest advancements in Databricks and big data technologies and drive their adoption. • Implement innovative solutions to improve data processing, storage, and analytics efficiency. • Identify opportunities to enhance existing data engineering processes using AI and machine learning tools. Documentation: • Create comprehensive blueprints, architectural diagrams, technical collateral, assets, and implementation plans for Databricks solutions. Required Qualifications: Education: • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Experience: • Minimum of 8 years in architecture roles, including at least 3-5 years working with Databricks. Technical Expertise: • Strong proficiency in Spark, Python, Scala, SQL, and cloud platforms (Azure, AWS, GCP). • Strong proficiency in Databricks workflows, Lakehouse architecture, Delta Tables, and MLflow. • Proficiency in architectural best practices in the cloud around user management, data privacy, data security, performance, and other non-functional requirements. • Familiarity with building AI/ML models on cloud solutions using Databricks. Soft Skills: • Strong analytical, problem-solving, and troubleshooting skills. • Excellent communication and ability to mentor and inspire teams. • Strong leadership abilities with experience managing cross-functional teams. Preferred Skills: • Databricks certification as a professional or architect. • Experience in the BFSI, Healthcare, or Retail domain. • Experience with hybrid cloud environments and multi-cloud strategies. • Experience with data governance principles, data privacy, and security. • Experience with data visualization tools like Power BI or Tableau.
Posted 1 month ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
FEQ326R387 Databricks is looking for a motivated Sr. Strategy and Operations Manager to join our Field Engineering team that helps define Go-To-Market (GTM) strategy, provides strategic analyses, and instills operational thoughtfulness in our successful and fast-growing GTM business. You will use data and qualitative information to help our Sales and Field Engineering Leadership manage the business. You will support essential aspects of our GTM design and annual planning process. You will help build our data and analytics foundation, including executive reporting, health-of-business reviews, dashboards, and indicators. You will work with our Field Engineering, Sales, Finance, Data, Marketing, Order Ops, and other GTM teams. You will report to the Sr. Director, Strategy and Ops. The Impact You Will Have Establish the GTM strategy and put processes in place to ensure we have the right investments at the right time Be a trusted partner to GTM Leadership by defining, tracking, and implementing goals, programs, and strategies that scale Guide the annual GTM planning process (FY and long-range modeling, investment ROI analysis, HC planning, capacity setting) Design and manage the headcount forecasting process (FY, long-range, and quarterly modeling) Lead executive analyses, strategies, and deliverables (e.g., board materials, QBR) Provide visibility and performance tracking to the business (operational councils, ops reviews, rep efficiency and productivity indicators, dashboards) Help build our data and reporting foundation Improve operational efficiency by automating and improving processes and dashboards that scale What We Look For 7+ years of strategy & operations experience (investment banking, strategy consulting, FP&A, business operations, Enterprise/Mid-Market SaaS) Ability and passion to analyze, set priorities, and solve complex problems Ability to summarize complex concepts and data and present clear information to executives, teams, and internal customers Data expert: querying and scoping (SQL, Databricks, BigQuery), analysis (Excel), summarizing (pivot tables, charts, slides, written explanation), reporting (dashboards, repositories) Process expert: envision end-to-end process change to solve our needs, drive agreement, document requirements, guide execution in partnership with IT, perform UAT, and report on progress Proficiency in BI and sales tools (e.g., Salesforce, Tableau) Passion for building and working in a high-performance global team A fondness for customer service and patience About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. 
We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 1 month ago