0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Selected Intern's Day-to-day Responsibilities Include: conducting experiments under defined conditions; working on technical content writing; operating scientific analytical instruments; laboratory management; performing QC on lab devices. About Company: M19-Material Intelligence Lab is perhaps the only startup in India developing indigenous scientific lab instruments for advanced material research. The company develops hardware and software to more precisely evaluate materials for airflow resistance, porosity, pore size, and water vapor transmission. These are crucial characteristics in testing any kind of material, with far-ranging impact on applications in air filtration, water purification, sound absorption, data acquisition, membrane technology, chemical manufacturing, biomechanical engineering, oil and natural gas, the food and beverage industry, and medical/technical textile development.
Posted 3 weeks ago
5.0 years
0 Lacs
Mehsana, Gujarat, India
On-site
Job Summary: Using analytical and experimental techniques, lead the development of fans and airflow systems in terms of noise, strength analysis, thermal flow analysis, and manufacturability. Responsibilities: Through numerical and experimental analysis, develop new types of fans (propeller, cross-flow, centrifugal, sirocco) for air conditioners that improve fluid performance and reduce noise. Perform thermal flow analysis within and around the system to improve the thermal efficiency of the product at the development stage, and analyze thermal efficiency at the customer site. Work with various stakeholders, including members of the platform design department, each module, and the production technology department, to develop a fan that maintains performance and noise levels without sacrificing strength or productivity. Propose and design prototypes and experimental equipment that will lead to the evaluation of subsystems, including fans and shrouds. Educational Qualification: Master’s degree (or equivalent) in fluid mechanics and aerodynamics dealing with the flow around fans, turbomachinery design, rotating machinery, and CFD analysis. Working Experience: At least 5 years of fan design, thermal analysis, or research experience. Skill Requirements: Communication and presentation skills. Ability to make objective decisions in collaboration with managers to ensure that the right decisions are made. Ability to make judgments that enable correct responses to stakeholder comments. Ability to propose new approaches to problems. Language: Excellent communication skills (fluent English, both written and spoken, is preferred). Location: Kadi
Posted 3 weeks ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Scientific instrumentation is a highly technical industry. As an electronics/electrical engineer, you'll design, develop, and test components, devices, systems, or equipment. You may be involved at any stage of a project, including the initial brief for a concept, the design and development stage, testing of prototypes, the final manufacture and assembly of the electronics board, wiring of scientific devices, quality checks, and procurement of electronics and electrical components. You'll work closely with colleagues in other branches of engineering. Required skill sets (must-have): Basic knowledge of electronic components such as microcontrollers, ICs, resistors, diodes, etc. Must know how to use a multimeter. Basic understanding of voltage-type transducers (pressure gauges, etc.). Must be familiar with basic electrical wiring terminology. Optional: Hands-on experience soldering through-hole components. Diploma or B.E. is a must. About Company: M19-Material Intelligence Lab is perhaps the only startup in India developing indigenous scientific lab instruments for advanced material research. The company develops hardware and software to more precisely evaluate materials for airflow resistance, porosity, pore size, and water vapor transmission. These are crucial characteristics in testing any kind of material, with far-ranging impact on applications in air filtration, water purification, sound absorption, data acquisition, membrane technology, chemical manufacturing, biomechanical engineering, oil and natural gas, the food and beverage industry, and medical/technical textile development.
Posted 3 weeks ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Selected Intern's Day-to-day Responsibilities Include: working hands-on with instrument assembly, tooling, product assembly, and quality control; reading and understanding schematics, blueprints, and assembly instructions; coordinating with multiple vendors for design and fabrication jobs. About Company: M19-Material Intelligence Lab is perhaps the only startup in India developing indigenous scientific lab instruments for advanced material research. The company develops hardware and software to more precisely evaluate materials for airflow resistance, porosity, pore size, and water vapor transmission. These are crucial characteristics in testing any kind of material, with far-ranging impact on applications in air filtration, water purification, sound absorption, data acquisition, membrane technology, chemical manufacturing, biomechanical engineering, oil and natural gas, the food and beverage industry, and medical/technical textile development.
Posted 3 weeks ago
0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Selected Intern's Day-to-day Responsibilities Include: conducting experiments under defined conditions to verify or reject various types of hypotheses using refined scientific methods; working on technical content writing. About Company: M19-Material Intelligence Lab is perhaps the only startup in India developing indigenous scientific lab instruments for advanced material research. The company develops hardware and software to more precisely evaluate materials for airflow resistance, porosity, pore size, and water vapor transmission. These are crucial characteristics in testing any kind of material, with far-ranging impact on applications in air filtration, water purification, sound absorption, data acquisition, membrane technology, chemical manufacturing, biomechanical engineering, oil and natural gas, the food and beverage industry, and medical/technical textile development.
Posted 3 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You are a technical and hands-on Lead Data Engineer with over 8 years of experience, responsible for driving the modernization of data transformation workflows within the organization. Your primary focus will be on migrating legacy SQL-based ETL logic to DBT-based transformations and designing a scalable, modular DBT architecture. You will also be tasked with auditing and refactoring legacy SQL code for clarity, efficiency, and modularity. In this role, you will lead the improvement of CI/CD pipelines for DBT, including automated testing, deployment, and code quality enforcement. Collaboration with data analysts, platform engineers, and business stakeholders is essential to understand current gaps and define future data pipelines. Additionally, you will own Airflow orchestration redesign where necessary and define coding standards, review processes, and documentation practices. As a Lead Data Engineer, you will coach junior data engineers on DBT and SQL best practices and provide lineage and impact analysis improvements using DBT's built-in tools and metadata. Key qualifications for this role include proven experience in migrating legacy SQL to DBT, a deep understanding of DBT best practices, proficiency in SQL performance tuning and query optimization, and hands-on experience with modern data stacks such as Snowflake or BigQuery. Strong communication and leadership skills are essential for this role, as you will be required to work cross-functionally and collaborate with various teams within the organization. Exposure to Python, data governance and lineage tools, and mentoring experience are considered nice-to-have qualifications for this position. If you are passionate about modernizing data transformation workflows and driving technical excellence within the organization, this role is the perfect fit for you.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Join us in building a modern workflow management system using Python (FastAPI), React/Next.js, GraphQL, and SQL Server on Azure. You'll work across a well-architected backend and a clean, Jamstack-style frontend with a strong focus on quality, performance, and automation. You will be responsible for building REST and GraphQL APIs with FastAPI, developing modern, responsive UIs using React + Next.js, orchestrating workflows using Celery/RQ and Prefect/Airflow, integrating SQLAlchemy ORM with MS SQL Server, and contributing to testing (pytest, mocking, coverage), CI/CD (GitHub Actions, Docker), and documentation. Additionally, you will collaborate with onshore teams and provide production support when needed. We are looking for individuals with at least 5 years of Python development experience and 3 years of React/Next.js experience. You should be strong in async programming & API design, comfortable with GraphQL, SQL, and workflow engines, have experience with testing, CI/CD, and code quality tools, and possess excellent communication skills. Flexibility to support shifts if needed is also required. If you are ready to build something impactful and meet the above qualifications, let's connect! Feel free to drop your profile or send a direct message if you're interested. Referrals are welcome too.
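As a rough illustration of the FastAPI side of this stack, the sketch below shows a minimal REST endpoint. The route, model fields, and in-memory store are assumptions for demonstration, not details from the posting; real persistence would go through the SQLAlchemy/MS SQL Server layer mentioned above.

```python
# Minimal FastAPI sketch (illustrative only; route and model names are assumed)
from typing import List, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class WorkflowTask(BaseModel):
    title: str
    assignee: Optional[str] = None

# In-memory list stands in for the SQLAlchemy / MS SQL Server layer
_tasks: List[WorkflowTask] = []

@app.post("/tasks", response_model=WorkflowTask)
async def create_task(task: WorkflowTask) -> WorkflowTask:
    """Accept a task payload and 'persist' it to the in-memory store."""
    _tasks.append(task)
    return task

@app.get("/tasks", response_model=List[WorkflowTask])
async def list_tasks() -> List[WorkflowTask]:
    """Return all stored tasks."""
    return _tasks
```

Assuming the file is saved as app.py, it can be served locally with `uvicorn app:app --reload`.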
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining the Analytics Engineering team at DAZN, where your primary responsibility will be transforming raw data into valuable insights that drive decision-making across various aspects of our global business. This includes content, product development, marketing strategies, and revenue generation. Your role will involve constructing dependable and scalable data pipelines and models to ensure that data is easily accessible and actionable for all stakeholders. As an Analytics Engineer with a minimum of 2 years of experience, you will play a crucial part in the construction and maintenance of our advanced data platform. Utilizing tools such as dbt, Snowflake, and Airflow, you will be tasked with creating well-organized, well-documented, and reliable datasets. This hands-on position is perfect for individuals aiming to enhance their technical expertise while contributing significantly to our high-impact analytics operations. Your key responsibilities will involve: - Developing and managing scalable data models through the use of dbt and Snowflake - Creating and coordinating data pipelines using Airflow or similar tools - Collaborating with various teams within DAZN to transform business requirements into robust datasets - Ensuring data quality through rigorous testing, validation, and monitoring procedures - Adhering to best practices in code versioning, CI/CD processes, and data documentation - Contributing to the enhancement of our data architecture and team standards We are seeking individuals with: - A minimum of 2 years of experience in analytics/data engineering or related fields - Proficiency in SQL and a solid understanding of cloud data warehouses (preference for Snowflake) - Familiarity with dbt for data modeling and transformation - Knowledge of Airflow or other workflow orchestration tools - Understanding of ELT processes, data modeling techniques, and data governance principles - Strong communication and collaboration skills Nice to have: - Previous experience in media, OTT, or sports technology sectors - Familiarity with BI tools such as Looker, Tableau, or Power BI - Exposure to testing frameworks like dbt tests or Great Expectations.
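For orientation only, a dbt-plus-Airflow setup of the kind described here is often wired up with a small DAG that runs the models and then the tests. The DAG id, schedule, and project path below are assumptions, not details from the posting.

```python
# Illustrative Airflow DAG that shells out to dbt; ids, schedule, and paths are assumed
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_models",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",   # assumed project path
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    dbt_run >> dbt_test   # build the models first, then validate them with dbt tests
```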
Posted 3 weeks ago
5.0 - 31.0 years
4 - 7 Lacs
Work From Home
Remote
We are seeking a highly skilled and passionate Software Developer with expertise in Python programming and a strong background in automation and AI technologies. The ideal candidate will have experience working on projects involving Large Language Models (LLMs), including fine-tuning, optimization, and integration. This role requires someone who can write clean, efficient code, automate processes, and contribute to the development of AI-driven solutions. Key Responsibilities Design, develop, and maintain robust Python applications and scripts for various business use cases. Build and enhance automation frameworks, tools, and workflows to improve efficiency. Work with LLMs (Large Language Models), including fine-tuning, prompt engineering, and model deployment. Integrate AI models into applications and services using APIs or custom pipelines. Collaborate with cross-functional teams (data scientists, ML engineers, product managers) to deliver end-to-end AI solutions. Optimize code for performance, scalability, and maintainability. Troubleshoot, debug, and enhance existing systems and automations. Stay updated with the latest advancements in AI/ML, LLMs, and Python ecosystems. Required Skills & Qualifications 3+ years of professional experience as a Python Developer or Software Engineer. Proven experience in writing clean, efficient, and maintainable Python code. Hands-on experience building automation workflows (e.g., using Python scripts, Airflow, or other automation tools). Working knowledge of AI/ML concepts, specifically Large Language Models (e.g., OpenAI, Hugging Face, LLaMA, GPT models). Experience in fine-tuning LLMs or training custom models using frameworks such as PyTorch, TensorFlow, or LangChain. Familiarity with APIs, RESTful services, and AI integrations. Strong problem-solving skills and ability to work in a collaborative team environment. Good understanding of version control systems (Git) and CI/CD practices. Preferred Qualifications Experience with cloud platforms (AWS, GCP, Azure) for AI/ML model deployment. Familiarity with Docker or Kubernetes for containerized deployments. Knowledge of data pipelines and ETL processes. Experience with prompt engineering and building LLM-powered chatbots or applications. Exposure to LangChain or other LLM orchestration frameworks. Soft Skills Strong analytical and debugging abilities. Excellent communication and teamwork skills. Ability to work on multiple projects with tight deadlines. Passion for learning emerging technologies and applying them to real-world problems.
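As a hedged sketch of the LLM-integration and prompt-engineering work this posting describes, the snippet below runs a small Hugging Face text-generation pipeline. The model choice (gpt2) and the prompt are placeholders for illustration, not the stack actually used by the employer.

```python
# Minimal LLM inference sketch with Hugging Face transformers; model and prompt are placeholders
from transformers import pipeline

# gpt2 is a small demo model; a production system would load a larger instruction-tuned LLM
generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the following support ticket in one sentence:\nThe export job fails every night at 2am."
result = generator(prompt, max_new_tokens=40, do_sample=False)

print(result[0]["generated_text"])
```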
Posted 3 weeks ago
5.0 - 13.0 years
0 Lacs
karnataka
On-site
Dexcom Corporation is a pioneer and global leader in continuous glucose monitoring (CGM), with a vision to forever change how diabetes is managed and to provide actionable insights for better health outcomes. With a history of 25 years in the industry, Dexcom is broadening its focus beyond diabetes to empower individuals to take control of their health. The company is dedicated to developing solutions for serious health conditions and aims to become a leading consumer health technology company. The software quality team at Dexcom is collaborative and innovative, focusing on ensuring the reliability and performance of CGM systems. The team's mission is to build quality into every stage of the development lifecycle through smart automation, rigorous testing, and a passion for improving lives. They are seeking individuals who are eager to grow their skills while contributing to life-changing technology. As a member of the team, your responsibilities will include participating in building quality into products by writing automated tests, contributing to software requirements and design specifications, designing, developing, executing, and maintaining automated and manual test scripts, creating verification and validation test plans, traceability matrices, and test reports, as well as recording and tracking issues using the bug tracking system. You will also analyze test failures, collaborate with development teams to investigate root causes, and contribute to the continuous improvement of the release process. To be successful in this role, you should have 13 years of hands-on experience in software development or software test development using Python or other object-oriented programming languages. Experience with SQL and NoSQL databases, automated test development for API testing, automated testing frameworks like Robot Framework, API testing, microservices, distributed systems in cloud environments, automated UI testing, cloud platforms like Google Cloud or AWS, containerization tools such as Docker and Kubernetes, and familiarity with FDA design control processes in the medical device industry are desired qualifications. Additionally, knowledge of GCP tools like Airflow, Dataflow, and BigQuery, distributed event streaming platforms like Kafka, performance testing, CI/CD experience, and Agile development and test development experience are valued. Effective collaboration across functions, self-starting abilities, and clear communication skills are also essential for success in this role. Please note that Dexcom does not accept unsolicited resumes or applications from agencies. Staffing and recruiting agencies must be authorized to submit profiles, applications, or resumes on specific requisitions. Dexcom is not responsible for any fees related to unsolicited resumes or applications.
Posted 3 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Notice: 30 days to immediate. Experience Required: 8+ years in data engineering and software development. Job Description: We are seeking a Lead Data Engineer with strong expertise in Python, PySpark, Airflow (batch jobs), HPCC, and ECL to drive complex data solutions across multi-functional teams. The ideal candidate will have hands-on experience with data modeling, test-driven development, and Agile/Waterfall methodologies. You’ll lead initiatives, collaborate across teams, and translate business needs into scalable data solutions using best practices in managed services or staff augmentation environments.
Posted 3 weeks ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description We are seeking a Senior GCP Software Engineer with Data Engineering knowledge to be part of the Data Products Development team. This individual contributor role is pivotal in delivering data products at speed for our analytical business customers. The role involves handling real-time repair order data, designing and modeling solutions, and developing solutions for real-world business problems. Expectations are that the candidate will be able to interface with business customers and management, and be knowledgeable about the GCP platform. We are looking for a candidate who is self-motivated to expand their role, proactively addresses challenges that arise, shares knowledge with the team, and is not afraid to think outside the box to identify solutions. Your skills shall be utilized to analyse and transform large datasets, support analytics in GCP, and ensure development processes and standards are met to sustain the growing infrastructure and business needs. Responsibilities Work on a small agile team to deliver curated data products for the Product Organization. Work effectively with fellow data engineers, product owners, data champions and other technical experts. Demonstrate technical knowledge and communication skills with the ability to advocate for well-designed solutions. Develop exceptional analytical data products using both streaming and batch ingestion patterns on Google Cloud Platform with solid data warehouse principles. Be the Subject Matter Expert in Data Engineering with a focus on GCP native services and other well-integrated third-party technologies. Architect and implement sophisticated ETL pipelines, ensuring efficient data integration into BigQuery from diverse batch and streaming sources. Spearhead the development and maintenance of data ingestion and analytics pipelines using cutting-edge tools and technologies, including Python, SQL, and DBT/Dataform. Ensure the highest standards of data quality and integrity across all data processes. Manage data workflows using Astronomer and Terraform for cloud infrastructure, promoting best practices in Infrastructure as Code. Bring rich experience in application support on GCP. Be experienced in data mapping, impact analysis, root cause analysis, and documenting data lineage to support robust data governance. Develop comprehensive documentation for data engineering processes, promoting knowledge sharing and system maintainability. Utilize GCP monitoring tools to proactively address performance issues and ensure system resilience, while providing expert production support. Provide strategic guidance and mentorship to team members on data transformation initiatives, championing data utility within the enterprise. Model data products to implement standardization and optimization from inception. Qualifications Experience working in API services (Kafka topics) and GCP native (or equivalent) services like BigQuery, Google Cloud Storage, Pub/Sub, Dataflow, Dataproc, etc. Experience working with Airflow for scheduling and orchestration of data pipelines. Experience working with Terraform to provision Infrastructure as Code. 2+ years of professional development experience in Java or Python. Bachelor’s degree in computer science or a related scientific field. Experience in analysing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build analytical domains and reusable data products.
Experience in working with architects to evaluate and productionalize data pipelines for data ingestion, curation, and consumption. Experience in working with stakeholders to formulate business problems as technical data requirements, identify and implement technical solutions while ensuring key business drivers are captured in collaboration with product management.
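To give a flavour of the BigQuery integration work mentioned above, here is a minimal, hedged sketch of querying BigQuery from Python. The project, dataset, and table names are invented placeholders, and authentication is assumed to come from application-default credentials.

```python
# Illustrative BigQuery query from Python; project, dataset, and table names are placeholders
from google.cloud import bigquery

client = bigquery.Client()  # relies on GOOGLE_APPLICATION_CREDENTIALS / application-default credentials

sql = """
    SELECT repair_order_id, COUNT(*) AS event_count
    FROM `my-project.analytics.repair_order_events`  -- hypothetical table
    GROUP BY repair_order_id
    ORDER BY event_count DESC
    LIMIT 10
"""

for row in client.query(sql).result():  # run the job and iterate over the result rows
    print(row.repair_order_id, row.event_count)
```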
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Some careers have more impact than others. If you’re looking for a career where you can make a real impression, join HSBC and discover how valued you’ll be. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of a Senior Software Engineer. In this role, you will: Understand project requirements and develop design specifications as per business agreements. Design, code, and maintain the Oracle systems based on established standards. Perform initial design reviews and recommend tactical as well as strategic improvements based on programme requirements. Write clear code and prepare coding documentation as per programme requirements. Under the DevOps initiative, carry out system integration and acceptance testing and perform bug fixes for production readiness. Analyse and troubleshoot production issues in a timely manner, followed by root cause analysis documentation. Work independently with minimal supervision, closely collaborating with other solution architects, business analysts and project managers. Follow all relevant IT policies, processes and standard operating procedures so that work is carried out in a controlled and consistent manner. Provide out-of-hours support to the production batch with a focus on performance tuning. Deliver valuable working software to the business with a constant focus on technical process improvement. Work in PODs to help deliver stories/tasks, and provide technical support for production tickets/incidents on a rota basis. Requirements To be successful in this role, you should meet the following requirements: Excellent communication skills (written or oral). Fluent written and spoken English is essential in order to communicate with the other teams/entities of the group (mainly teams based in Paris, New York, London and in Asia). Good exposure to core Java, the collections framework and OOP concepts. Work experience with the Spring Boot framework and JUnit. Data analytics skills (as the role involves processing large data sets). Good ability to work in a team split across different locations. Being responsive is essential, especially regarding the daily support of the application. Being autonomous: knowing how to take responsibility for actions to be undertaken and bring them to completion. Being “customer oriented”: knowing how to understand and interpret user needs. Being flexible. Additional Skills Knowledge of batch processing systems/tools (Choreographer, Airflow, etc.). Working knowledge of SQL and queries. Exposure to Apache Spark or any big data framework. Fair understanding of DevOps concepts. You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Description Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns, and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures Of Outcomes Adherence to engineering processes and standards. Adherence to schedule/timelines. Adherence to SLAs where applicable. Number of defects post delivery. Number of non-compliance issues. Reduction of recurrence of known defects. Quick turnaround of production bugs. Completion of applicable technical/domain certifications. Completion of all mandatory training requirements. Efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times). Average time to detect, respond to, and resolve pipeline failures or data issues. Number of data security incidents or compliance breaches. Outputs Expected Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers. Documentation: Create and review templates, checklists, guidelines, and standards for design, processes, and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments Skills: Cloud Platforms (AWS, MS Azure, GC, etc.); Containerization and Orchestration (Docker, Kubernetes, etc.); API development; Data Pipeline construction using languages like Python, PySpark, and SQL; Data Streaming (Kafka and Azure Event Hub, etc.); Data Parsing (Akka and MinIO, etc.); Database Management (SQL and NoSQL, including Clickhouse, PostgreSQL, etc.); Agile Methodology (Git, Jenkins, or Azure DevOps, etc.); JS-like connectors/frameworks for frontend/backend; Collaboration and Communication Skills. AWS Cloud, Azure Cloud, Docker, Kubernetes
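As an informal companion to the PySpark and SQL pipeline skills listed above, the sketch below shows a small ingest-transform-write job. The bucket paths, column names, and filter condition are placeholders, not details from this role description.

```python
# Minimal PySpark ingest-transform-write sketch; paths and column names are placeholders
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Read raw CSV files (assumed source location)
orders = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))   # enforce a numeric type
    .filter(F.col("order_status") == "COMPLETED")            # keep only completed orders
    .dropDuplicates(["order_id"])                            # basic de-duplication
)

# Write the curated output partitioned by date (assumed target location)
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3a://curated-bucket/orders/")
```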
Posted 3 weeks ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a highly skilled and passionate Senior Solution Architect – Generative AI to join our team and lead the design and implementation of cutting-edge AI solutions. This role is ideal for a seasoned professional with extensive experience in AI/ML development and architecture, a deep understanding of generative AI technologies, and a strategic mindset to align innovations with business requirements. Responsibilities Develop and oversee architectural designs for generative AI models, frameworks, and solutions tailored to business needs Design scalable pipelines for integrating and deploying generative AI solutions in alignment with enterprise architecture Perform in-depth research to stay current on advancements in generative AI, including GPT, DALL-E, and Stable Diffusion to evaluate their applicability Collaborate with stakeholders to assess business requirements and translate them into concrete AI strategies and actionable implementation plans Lead the end-to-end development, testing, and deployment of generative AI systems, acting as a technical guide for teams Advocate for the adoption of best practices, tools, and frameworks to enhance enterprise AI capabilities Create APIs and services to integrate generative AI tools within business workflows or customer-facing platforms Work closely with data engineers and scientists to ensure high-quality data preprocessing for AI model training Implement responsible AI protocols to address governance, ethical usage, and regulatory compliance Identify and mitigate data biases, ensuring data privacy and security concerns are addressed Act as a mentor for junior AI developers and other cross-functional team members in understanding generative AI technologies Facilitate cross-disciplinary collaboration with data scientists, engineers, product managers, and business stakeholders to drive project success Benchmark emerging generative AI tools like OpenAI models, Hugging Face, and custom-built frameworks for potential integration and improvements Conduct periodic evaluations of deployed AI systems, recommending adjustments and enhancements for improved operational efficiency Requirements 15-23 years of overall IT experience with at least 5+ years of proven experience in AI/ML development or architecture roles Background in designing and implementing generative AI solutions, including areas like NLP, computer vision, or code generation Familiarity with foundational models such as GPT, BERT, and their customization for enterprise use cases Knowledge of AI/ML frameworks/tools such as TensorFlow, PyTorch, or Hugging Face models Skills in cloud platforms (AWS, Azure, Google Cloud) and container platforms like Docker or Kubernetes Proficiency in Python, R, or similar programming languages for building generative AI solutions Understanding of MLOps principles, pipeline orchestration tools (Kubeflow, Airflow), and CI/CD practices Expertise in responsible AI governance, ethical frameworks, and compliance with data privacy regulations Capability to collaborate with multidisciplinary teams to align AI strategies with organizational goals
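Since the role above calls for RAG pipelines and vector database integration (FAISS, ChromaDB, or Azure AI Search), here is a deliberately small FAISS retrieval sketch. The embedding dimension and the random vectors are stand-ins for real document and query embeddings, not part of the posting.

```python
# Toy vector-retrieval step of a RAG pipeline using FAISS; embeddings are random placeholders
import faiss
import numpy as np

dim = 384  # a common sentence-embedding size (assumption)
doc_vectors = np.random.rand(1000, dim).astype("float32")   # stand-in for document embeddings

index = faiss.IndexFlatL2(dim)   # exact L2 search; larger systems often use IVF/HNSW indexes
index.add(doc_vectors)

query_vector = np.random.rand(1, dim).astype("float32")     # stand-in for an embedded user query
distances, ids = index.search(query_vector, 5)               # 5 nearest documents

# The retrieved ids would normally be mapped back to text chunks and placed in the LLM prompt
print(ids[0])
```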
Posted 3 weeks ago
8.0 - 13.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Award and ranked Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence in 32 cities across the globe, we support 100+ clients across banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT JOB SUMMARY: Position: Sr Consultant. Location: Capco locations (Bengaluru/Chennai/Hyderabad/Pune/Mumbai/Gurugram). Band: M3/M4 (8 to 14 years). Role Description: Job Title: Senior Consultant - Data Engineer. Responsibilities Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability. Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity. Work within the client's best practice guidelines as set out by the Data Engineering Lead. Work with data modellers and testers to ensure pipelines are implemented correctly. Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions. Role Requirements Strong data engineer with experience in financial services. Knowledge of and experience building data pipelines in Azure Databricks. Demonstrate a continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions. Work within an Agile team. Experience/Skillset 8+ years of experience in data engineering. Good skills in SQL, Python and PySpark. Good knowledge of Azure Databricks (understanding of delta tables, Apache Spark, Unity Catalog). Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions. Good knowledge of the SDLC. Familiar with Agile/Scrum ways of working. Strong verbal and written communication skills. Ability to manage multiple priorities and deliver to tight deadlines. WHY JOIN CAPCO You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer: A work culture focused on innovation and creating lasting value for our clients and employees; ongoing learning opportunities to help you acquire new skills or deepen existing expertise; a flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients; and a diverse, inclusive, meritocratic culture. #LI-Hybrid
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Bengaluru
Work from Office
The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Responsibilities: Design, construct, install, test and maintain highly scalable data management systems and data pipelines. Ensure systems meet business requirements and industry practices. Build high-performance algorithms, prototypes, predictive models, and proof of concepts. Research opportunities for data acquisition and new uses for existing data. Develop data set processes for data modeling, mining and production. Integrate new data management technologies and software engineering tools into existing structures. Create custom software components and analytics applications. Install and update disaster recovery procedures. Collaborate with data architects, modelers, and IT team members on project goals. Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects. Qualifications: Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience. Proven 5-8 years of experience as a Senior Data Engineer or similar role. Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation (DB2). Experience with data pipeline and workflow management tools. Experience with object-oriented/object function scripting languages: Python, Java, Go, etc. Strong problem solving and analytical skills. Excellent verbal communication skills. Good interpersonal skills. Ability to provide technical leadership for the team.
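For context on the kind of streaming pipeline implied by the Spark-and-Kafka tooling above, here is a hedged Spark Structured Streaming sketch. The broker address and topic name are placeholders, the console sink is used only for demonstration (a real job would write to a lake or warehouse), and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Illustrative Spark Structured Streaming read from Kafka; broker and topic names are placeholders
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
    .option("subscribe", "orders")                      # assumed topic
    .load()
    .select(F.col("value").cast("string").alias("payload"))   # Kafka values arrive as bytes
)

query = (
    events.writeStream
    .format("console")       # console sink for demonstration only
    .outputMode("append")
    .start()
)
query.awaitTermination()
```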
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Award and ranked Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence in 32 cities across the globe, we support 100+ clients across banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT Job Title: Big Data Engineer. The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Responsibilities: Design, construct, install, test and maintain highly scalable data management systems and data pipelines. Ensure systems meet business requirements and industry practices. Build high-performance algorithms, prototypes, predictive models, and proof of concepts. Research opportunities for data acquisition and new uses for existing data. Develop data set processes for data modeling, mining and production. Integrate new data management technologies and software engineering tools into existing structures. Create custom software components and analytics applications. Install and update disaster recovery procedures. Collaborate with data architects, modelers, and IT team members on project goals. Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects. Qualifications: Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience. Proven 5-8 years of experience as a Senior Data Engineer or similar role. Experience with big data tools: PySpark, Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc. Expert-level SQL skills for data manipulation (DML) and validation (DB2). Experience with data pipeline and workflow management tools. Experience with object-oriented/object function scripting languages: Python, Java, Go, etc. Strong problem solving and analytical skills. Excellent verbal communication skills. Good interpersonal skills. Ability to provide technical leadership for the team.
Posted 3 weeks ago
4.0 - 8.0 years
10 - 14 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Award and ranked Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence in 32 cities across the globe, we support 100+ clients across banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. MAKE AN IMPACT Job Title: Big Data Engineer - Scala. Preferred Skills: Strong skills in messaging technologies like Apache Kafka or equivalent; programming skills in Scala and Spark with optimization techniques, plus Python. Should be able to write queries through Jupyter Notebook. Orchestration tools like NiFi and Airflow. Design and implement intuitive, responsive UIs that allow issuers to better understand data and analytics. Experience with SQL and distributed systems. Strong understanding of cloud architecture. Ensure a high-quality code base by writing and reviewing performant, well-tested code. Demonstrated experience building complex products. Knowledge of Splunk or other alerting and monitoring solutions. Fluent in the use of Git and Jenkins. Broad understanding of software engineering concepts and methodologies is required.
Posted 3 weeks ago
2.0 - 5.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. In your role, you will be responsible for working with multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; be proactive and collaborative, with the ability to respond to critical situations. Ability to analyse data for functional business requirements and interface directly with customers. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer. Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. End-to-end functional knowledge of the data pipeline/transformation implementation that the candidate has done; should understand the purpose/KPIs for which the data transformation was done. Preferred technical and professional experience Experience with AEM Core Technologies: OSGI Services, Apache Sling, Granite Framework, Java Content Repository API, Java 8+, Localization. Familiarity with build tools such as Jenkins and Maven. Knowledge of version control tools, especially Git. Knowledge of patterns and good practices to design and develop quality, clean code. Knowledge of HTML, CSS, JavaScript and jQuery. Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
Posted 3 weeks ago
2.0 - 6.0 years
12 - 16 Lacs
Kochi
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience in big data technologies like Hadoop, Apache Spark, and Hive. Practical experience in Core Java (1.8 preferred)/Python/Scala. Experience in AWS cloud services including S3, Redshift, EMR, etc. Strong expertise in RDBMS and SQL. Good experience in Linux and shell scripting. Experience in data pipelines using Apache Airflow. Preferred technical and professional experience You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
India
On-site
Job Title: Java Developer – Java, Flink/Kafka, SQL, AWS Location: Bangalore, Mumbai Experience: 3 to 7 Years Mode: Hybrid Job Description: We are looking for a skilled Java Developer with strong experience in Java, Flink/Kafka, SQL, and AWS to join our high-performing engineering team. You will be responsible for designing and maintaining scalable data processing systems that power real-time and batch analytics across various platforms. Key Responsibilities: Design, build, and maintain scalable data pipelines and ETL workflows Develop real-time data processing applications using Apache Flink or Apache Kafka Optimize data flow and storage for performance, scalability, and reliability Collaborate with data scientists, analysts, and cross-functional teams to implement data-driven solutions Ensure data quality, lineage, and governance standards across all systems Automate and improve existing data processes using orchestration tools Develop and maintain comprehensive technical documentation Evaluate and integrate new technologies and tools for better data management and processing Mandatory Skills: Strong programming experience with Java Hands-on experience with Apache Flink (for Combination 1) or Apache Kafka (for Combination 2) Solid understanding of SQL and experience with relational databases (PostgreSQL, MySQL, Oracle, etc.) Experience in developing real-time streaming and batch processing pipelines Working knowledge of AWS services such as S3, EMR, Kinesis, Lambda, Glue, etc. Familiarity with CI/CD tools and version control systems like Git Good problem-solving skills and ability to work in an agile team environment Preferred Skills: Experience with Big Data technologies like Spark, Hadoop Exposure to workflow management tools like Airflow, Luigi, or NiFi Knowledge of Docker and Kubernetes Understanding of data security, governance, and metadata management Basic knowledge of machine learning workflows or MLOps Familiarity with NoSQL databases like MongoDB, Cassandra Knowledge of infrastructure-as-code tools like Terraform or CloudFormation Required Skill Combinations (choose one): Combination 1: Java, Flink, SQL, AWS Combination 2: Java, Kafka, SQL, AWS
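The role above is Java-centric, so the sketch below is purely conceptual: it uses the Python kafka-python client to show the consume-and-process loop that a streaming pipeline of this kind revolves around. The topic, broker address, and payload fields are placeholders, not details from the posting.

```python
# Conceptual consume-and-process loop (kafka-python); topic, broker, and fields are placeholders
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # assumed topic name
    bootstrap_servers="localhost:9092",          # assumed broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="demo-analytics",
)

for message in consumer:
    order = message.value
    # A real pipeline would validate, enrich, and forward records to S3/Kinesis or a warehouse here
    print(order.get("order_id"), order.get("amount"))
```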
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Pune
Work from Office
We are seeking a Sr. Data Engineer to join our Data Engineering team within our Enterprise Data Insights organization to build data solutions, design and implement ETL/ELT processes, and manage our data platform to enable our cross-functional stakeholders. As a part of our Corporate Engineering division, our vision is to spearhead technology and data-led solutions and experiences to drive growth and innovation at scale. The ideal candidate will have a strong data engineering background, advanced Python knowledge, and experience with cloud services and SQL/NoSQL databases. You will work closely with our cross-functional stakeholders in Product, Finance and GTM along with Business and Enterprise Technology teams. As a Senior Data Engineer, you will: Collaborate closely with various stakeholders to prioritize requests, identify improvements, and offer recommendations. Take the lead in analyzing, designing, and implementing data solutions, which involves constructing and designing data models and ETL processes. Cultivate collaboration with corporate engineering, product teams, and other engineering groups. Lead and mentor engineering discussions, advocating for best practices. Actively participate in design and code reviews. Access and explore third-party data APIs to determine the data required to meet business needs. Ensure data quality and integrity across different sources and systems. Manage data pipelines for both analytics and operational purposes. Continuously enhance processes and policies to improve SLA and SOX compliance. You'll be a great addition to the team if you: Hold a B.S., M.S., or Ph.D. in Computer Science or a related technical field. Possess over 5 years of experience in Data Engineering, focusing on building and maintaining data environments. Demonstrate at least 5 years of experience in designing and constructing ETL/ELT processes, managing data solutions within an SLA-driven environment. Exhibit a strong background in developing data products, APIs, and maintaining testing, monitoring, isolation, and SLA processes. Possess advanced knowledge of SQL/NoSQL databases (such as Snowflake, Redshift, MongoDB). Are proficient in programming with Python or other scripting languages. Have familiarity with columnar OLAP databases and data modeling. Have experience in building ELT/ETL processes using tools like dbt, Airflow, Fivetran, CI/CD using GitHub, and reporting in Tableau. Possess excellent communication and interpersonal skills to effectively collaborate with various business stakeholders and translate requirements. Added bonus if you also have: A good understanding of Salesforce & Netsuite systems Experience in SAAS environments Designed and deployed ML models Experience with events and streaming data
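As a loose illustration of the Snowflake-centric stack this posting mentions, the snippet below runs a single query through the Snowflake Python connector. The account, credentials, and table are invented placeholders, and real pipelines would typically push such transformations through dbt models orchestrated by Airflow instead of ad hoc scripts.

```python
# Illustrative Snowflake query via the Python connector; credentials and table are placeholders
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="etl_user",             # placeholder credentials
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT region, SUM(revenue) FROM sales GROUP BY region")  # hypothetical table
    for region, revenue in cur.fetchall():
        print(region, revenue)
finally:
    conn.close()
```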
Posted 3 weeks ago
65.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What We Offer At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects. Because we believe that your career path should be as unique as you are. Group Summary Magna is more than one of the world’s largest suppliers in the automotive space. We are a mobility technology company built to innovate, with a global, entrepreneurial-minded team. With 65+ years of expertise, our ecosystem of interconnected products combined with our complete vehicle expertise uniquely positions us to advance mobility in an expanded transportation landscape. Job Responsibilities Magna New Mobility is seeking a Data Engineer to join our Software Platform team. As a Backend Developer with cloud experience, you will be responsible for designing, developing, and maintaining the server-side components of our applications. You will work closely with cross-functional teams to ensure our systems are scalable, reliable, and secure. Your expertise in cloud platforms will be crucial in optimizing our infrastructure and deploying solutions that leverage cloud-native features. Your Responsibilities Design & Development: Develop robust, scalable, and high-performance backend systems and APIs. Design and implement server-side logic and integrate with front-end components. Database Knowledge: Strong experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases, especially MongoDB. Proficient in SQL and handling medium to large-scale datasets using big data platforms like Databricks. Familiarity with Change Data Capture (CDC) concepts, and hands-on experience with modern data streaming and integration tools such as Debezium and Apache Kafka. Cloud Integration: Leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to deploy, manage, and scale applications. Implement cloud-based solutions for storage, computing, and networking. Security: Implement and maintain security best practices, including authentication, authorization, and data protection. Performance Optimization: Identify and resolve performance bottlenecks. Monitor application performance and implement improvements as needed. Collaboration: Work with product managers, front-end developers, and other stakeholders to understand requirements and deliver solutions. Participate in code reviews and contribute to team knowledge sharing. Troubleshooting: Diagnose and resolve issues related to backend systems and cloud infrastructure. Provide support for production environments and ensure high availability. Who We Are Looking For Bachelor's Degree or Equivalent Experience in Computer Science or a relevant technical field. Experience with Microservices: Knowledge and experience with microservices architecture. 3+ years of experience in backend development with a strong focus on cloud technologies. Technical Skills: Proficiency in backend programming languages such as Go, Python, Node.js, C/C++ or Java. Experience with any cloud platforms (AWS, Azure, Google Cloud) and related services (e.g., EC2, Lambda, S3, CloudFormation). Experience in building scalable ETL pipelines on industry-standard ETL orchestration tools (Airflow, Dagster, Luigi, Google Cloud Composer, etc.)
with deep expertise in SQL, PySpark, or Scala. Database Knowledge: Experience with relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB). Expertise in SQL and using big data technologies (e.g. Hive, Presto, Spark, Iceberg, Flink, Databricks etc) on medium to large-scale data. DevOps: Familiarity with CI/CD pipelines, infrastructure as code (IaC), containerization (Docker), and orchestration tools (Kubernetes). Awareness, Unity, Empowerment At Magna, we believe that a diverse workforce is critical to our success. That’s why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail to comply with GDPR requirements and your local Data Privacy Law. Worker Type Regular / Permanent Group Magna Corporate
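Because the posting above names Airflow-style orchestration alongside SQL and PySpark, here is a minimal sketch of a daily extract-transform-load DAG, assuming Apache Airflow 2.4 or later; the DAG id, sample rows, and transformation logic are hypothetical placeholders, not anything specified by Magna.

    # Minimal Airflow DAG sketch: daily extract -> transform -> load.
    # All names and task logic below are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract():
        # A real task might pull Debezium CDC events from Kafka or query a
        # source database; here we just return a few sample rows.
        return [{"id": 1, "amount": 120.0}, {"id": 2, "amount": 75.5}]


    def transform(ti):
        rows = ti.xcom_pull(task_ids="extract")
        # Hypothetical transformation: keep only rows above a threshold.
        return [row for row in rows if row["amount"] > 100]


    def load(ti):
        rows = ti.xcom_pull(task_ids="transform")
        # A real task would write to a warehouse table (Databricks, PostgreSQL, ...).
        print(f"loading {len(rows)} row(s)")


    with DAG(
        dag_id="sample_daily_etl",     # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        load_task = PythonOperator(task_id="load", python_callable=load)

        extract_task >> transform_task >> load_task

In a production pipeline the Python tasks would typically delegate the heavy lifting to PySpark or SQL jobs (for example on Databricks), with Airflow handling only scheduling, dependencies, and retries.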
Posted 3 weeks ago
10.0 - 20.0 years
20 - 35 Lacs
Bengaluru
Work from Office
We are looking for a Technical Architect who has experience designing software solutions from the ground up, making high-level decisions about each stage of the process and leading a team of engineers to create the final product. To be successful as a Technical Architect, you should be an expert problem solver with a strong understanding of the broad range of software technologies and platforms available. Top candidates will also be excellent leaders and communicators.
Responsibilities
Collaborate with stakeholders (Product Owners, Engineers, Clients) to define and refine software architecture aligned with the product vision.
Design scalable, modular, and secure system architecture using microservices and modern design patterns.
Create high-level product specifications, architecture diagrams, and technical documentation.
Provide architectural blueprints, guidance, and technical leadership to development teams.
Build and integrate AI-driven components, including Agentic AI (AutoGen, LangGraph), RAG pipelines, and LLM capabilities (open-source and commercial models).
Define vector-based data pipelines and integrate vector databases such as FAISS, ChromaDB, or Azure AI Search (an illustrative RAG retrieval sketch follows this posting).
Lead the implementation of prompt engineering strategies to optimize LLM outcomes.
Architect and deploy scalable, secure solutions on Azure Cloud, using services like Azure Kubernetes Service (AKS), Azure Functions, Azure Storage, and Azure DevOps.
Ensure robust CI/CD and DevSecOps practices are in place to streamline deployments and enforce compliance.
Guide the team in troubleshooting, code reviews, performance tuning, and adherence to software engineering best practices.
Conduct regular technical reviews, present progress updates, and ensure timely delivery of milestones.
Drive innovation and exploration of emerging technologies, particularly in the field of Generative AI and Intelligent Automation.
Requirements
10+ years of hands-on software development experience, with a strong Python background.
3+ years of experience in a Software or Solution Architect role.
Proven experience in designing and building production-grade solutions using microservices architecture.
Strong expertise in Agentic AI frameworks (AutoGen, LangGraph, CrewAI, LangChain, etc.).
Working knowledge of LLMs (open-source like LLaMA, Mistral; commercial like OpenAI, Azure OpenAI).
Solid experience with prompt engineering, RAG pipelines, and vector database integration.
Experience deploying AI/ML solutions on Azure Cloud, leveraging PaaS components and cloud-native patterns.
Proficiency with containerization and orchestration tools (Docker, Kubernetes).
Knowledge of architectural patterns: event-driven, domain-driven design, CQRS, service mesh, etc.
Familiarity with NLP tools, frameworks, and libraries (spaCy, Hugging Face Transformers, etc.).
Experience with Apache Airflow or other workflow orchestration tools.
Strong communication and leadership skills to work cross-functionally with technical and non-technical teams.
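Since this role calls out RAG pipelines with vector stores such as FAISS or ChromaDB, here is a minimal retrieval sketch assuming FAISS and NumPy; the embed() helper, the sample documents, and the prompt template are hypothetical placeholders, and a real pipeline would use an actual embedding model plus an LLM call rather than the print statement at the end.

    # Minimal RAG retrieval sketch using FAISS. embed() is a hypothetical
    # stand-in for a real sentence-embedding model (e.g. one from Hugging Face).
    import numpy as np
    import faiss


    def embed(texts, dim=64):
        # Placeholder embedding: hashed bag-of-words, consistent within one run.
        vectors = np.zeros((len(texts), dim), dtype=np.float32)
        for i, text in enumerate(texts):
            for token in text.lower().split():
                vectors[i, hash(token) % dim] += 1.0
        return vectors


    documents = [
        "Debezium streams change events from PostgreSQL into Kafka topics.",
        "Retrieval-augmented generation grounds an LLM answer in retrieved context.",
        "Azure Kubernetes Service hosts containerized microservices.",
    ]

    index = faiss.IndexFlatL2(64)    # exact L2 index; ChromaDB or Azure AI Search could fill the same role
    index.add(embed(documents))      # index the document vectors

    query = "How does retrieval-augmented generation work?"
    _, ids = index.search(embed([query]), 1)
    context = documents[int(ids[0][0])]

    # The retrieved context is stitched into a prompt for an LLM (OpenAI, LLaMA,
    # Mistral, ...); the actual model call is omitted from this sketch.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    print(prompt)

In production the placeholder embedding and the final print would be replaced by a real embedding model and an LLM completion call, and the vector index would typically live behind a service rather than in process memory.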
Posted 3 weeks ago