Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 years
0 Lacs
Durgapur, West Bengal, India
On-site
Company Overview
Pinnacle Infotech values inclusive growth in an agile, diverse environment. With 30+ years of global experience, our 3,400+ experts have completed 15,000+ projects across 43+ countries for 5,000+ clients. Join us for rapid advancement, cutting-edge training, and impactful global projects. Embrace E.A.R.T.H. values, celebrate uniqueness, and drive swift career growth with Pinnaclites!

Job Title: MLOps Engineer

Job Summary: As an MLOps Engineer, you will be responsible for building, deploying, and maintaining the infrastructure required for machine learning models and ETL data pipelines. You will work closely with data scientists and software developers to streamline our machine learning operations, manage data workflows, and ensure that ML solutions are scalable, reliable, and secure.

Location: Durgapur/Jaipur/Madurai

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
3+ years of experience in MLOps, data engineering, or a similar role.
Proficiency in programming languages such as Python, Spark, and SQL.
Experience with ML model deployment frameworks and tools (e.g., MLflow).
Hands-on experience with cloud platforms (AWS, Azure, GCP) and infrastructure management.
Familiarity with containerization and orchestration tools (Docker, Kubernetes).
Understanding of DevOps practices, CI/CD pipelines, and monitoring tools.
Excellent problem-solving skills and the ability to work independently and as part of a team.

Key Responsibilities:
Data Engineering and Pipeline Management: Design, develop, optimize, and maintain ETL processes and data pipelines to collect, process, and store data from multiple sources. Ensure data quality, integrity, and consistency across various databases. Collaborate with data scientists to make data available in the right format and at the right speed for machine learning. Implement and manage data security and privacy protocols as per industry standards.
ML Operations and Deployment: Design, build, and optimize scalable and reliable ML deployment pipelines. Develop CI/CD pipelines for automated ML model training, testing, and deployment. Implement and manage containerization (Docker) and orchestration tools (e.g., Kubernetes) for ML workflows. Monitor and troubleshoot model performance and infrastructure, ensuring smooth operation in production environments.
Infrastructure Management: Manage cloud infrastructure (AWS, GCP, Azure) to support data and ML operations. Optimize and scale ML and data workflows to handle large-scale datasets. Set up and manage monitoring tools for infrastructure and application performance.
Collaboration and Best Practices: Work closely with data science, software development, and product teams to understand project needs and optimize model performance. Develop and document best practices, guidelines, and protocols for ML lifecycle management.

Interested candidates, please share your resume at sunitas@pinnacleinfotech.com
Posted 5 days ago
5.0 years
0 Lacs
Mohali district, India
On-site
Experience Required: 5+ years
Location: Chennai

Skills & Competencies:
Ruby on Rails: Extensive experience with Ruby on Rails and common libraries such as RSpec and Resque.
JavaScript: Proficiency in JavaScript and familiarity with frameworks like AngularJS or ReactJS.
Linux: Strong knowledge of Linux operating systems, including system administration and shell scripting.
Docker: Experience with Docker for containerization and orchestration of applications.
Version Control: Proficiency with version control tools such as Git.
Front-End Technologies: Basic understanding of front-end technologies, including HTML5, CSS3, and JavaScript.
Problem-Solving: Strong analytical and problem-solving skills.
Communication: Excellent communication and teamwork skills.
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Position: Software Developer
Location: Noida, Uttar Pradesh, India
Experience: 3–4 Years
Employment Type: Full-Time

1. About Us
We are a leading product development company focused on building scalable, secure, and cloud-native solutions that solve real-world problems. Having successfully transitioned from a monolithic, on-premises infrastructure to a microservices-based architecture on Microsoft Azure, we are continuously investing in modern technologies to deliver high-performance enterprise products. Join us to be part of an innovative team driving digital transformation.

2. Job Summary
We are seeking a talented Software Developer with a strong foundation in .NET Core and a working knowledge of cloud-native development. The ideal candidate will have experience designing and developing RESTful APIs, deploying applications to Azure, and working with containers, CI/CD pipelines, and API gateways. You will work closely with cross-functional teams to build robust back-end services that power our core product offerings.

3. Key Responsibilities
Design, develop, and maintain backend services and microservices using .NET Core.
Develop and consume RESTful APIs to support web and mobile applications.
Containerize applications using Docker and deploy them on Kubernetes (AKS preferred).
Implement and manage APIs through Azure API Management or similar gateways.
Collaborate in setting up and maintaining CI/CD pipelines using Azure DevOps, GitHub Actions, or equivalent.
Monitor and optimize performance, scalability, and reliability of applications in a cloud environment.
Collaborate with cross-functional teams to define requirements and deliver features in agile sprints.
Follow best practices in secure coding, testing, and version control.

4. Required Skills and Qualifications
Education: Bachelor's degree in Computer Science, IT, or a related field.
Experience: 3–4 years of hands-on software development experience.
.NET Core: Strong experience with .NET Core (3.1, 6, or 7) for backend service development.
API Development: Proven ability to design and implement RESTful APIs.
Database: Solid experience with SQL Server 2022, including complex queries and stored procedures.
Containerization: Working knowledge of Docker and experience with container orchestration (e.g., Kubernetes).
Cloud: Experience deploying and managing applications in Microsoft Azure.
CI/CD: Familiar with CI/CD pipelines and version control using Git.
API Gateway: Exposure to Azure API Management or similar tools.
Agile: Practical understanding of agile methodologies and collaborative development practices.
Security Awareness: Basic understanding of application security principles and authentication (OAuth, JWT).

5. Preferred Skills
Frontend experience with JavaScript, Angular, or React is a plus.
Experience with message brokers (e.g., Azure Service Bus, RabbitMQ).
Exposure to monitoring/logging tools like Azure Monitor, Application Insights, Prometheus, or Grafana.

6. What We Offer
Exposure to cutting-edge cloud technologies and modern architecture
A product-first culture with room for innovation and growth
Collaborative team environment and modern Agile practices
Flexible working hours and hybrid options

Schedule: Day shift
Work Location: In-person – Sector 8, Noida, Uttar Pradesh (Preferred)
Application Questions: What is your current CTC? What is your expected CTC?
Education: Bachelor's degree (Preferred)
Experience: C#: 2 years (Required); .NET Core: 2 years (Required); Total work: 4 years (Preferred)
Job Type: Full-time
Pay: ₹300,000.00 - ₹500,000.00 per year
Schedule: Day shift
Work Location: In person
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
Noida, Uttar Pradesh
Remote
Overview:
We are seeking a skilled UX Developer with a solid background in Angular and TypeScript to join our growing team in Noida. The ideal candidate should have 2–3 years of relevant experience in UX development, with hands-on exposure to Micro Frontend (MFE) development, particularly using Angular. The candidate should also possess a working knowledge of containerization, API gateways, and Azure Kubernetes Service (AKS), and be familiar with CI/CD pipelines from a usage perspective. You will play a key role in building modular and scalable front-end components that integrate seamlessly with .NET APIs and cloud environments, while ensuring an intuitive user experience.

Key Responsibilities:
Micro Frontend Development: Design and develop Angular-based Micro Frontends (MFEs), ensuring modularity, reusability, and performance in cloud-hosted environments.
Cloud-Readiness: Ensure Angular applications are structured for deployment and operation within Azure Cloud, including awareness of containerization strategies, AKS, and integration via API Gateways.
Interface Development: Create responsive, accessible, and user-friendly interfaces using Angular and TypeScript.
API Integration: Consume and integrate .NET APIs for seamless data communication between front-end and back-end systems.
CI/CD Pipeline Usage: Collaborate with DevOps teams and understand the CI/CD process (e.g., triggering builds, deployments, environment validation).
Prototyping & UX Collaboration: Translate Figma-based prototypes into working code, aligning with user-centered design principles.
Usability Testing: Participate in usability testing and iterate on UI designs based on feedback.
Documentation: Maintain clear and concise documentation for UI components, API contracts, and integration points.

Required Qualifications:
Experience: 2–3 years in UX/front-end development roles.
Technical Skills: Strong proficiency in Angular and TypeScript. Experience developing Micro Frontends using Angular. Good understanding of .NET API integration. Awareness of containerization tools like Docker (usage, not configuration). Familiarity with API Gateways and basic knowledge of Azure Kubernetes Service (AKS). Practical knowledge of CI/CD pipelines from a developer usage standpoint.
Tools: Familiar with design tools like Figma.
UI Principles: Sound knowledge of UX principles, accessibility standards, and responsive design.
Communication: Strong communication and teamwork skills.

Preferred Qualifications:
Experience with version control systems like Git.
Exposure to other front-end libraries/frameworks (React, Vue.js).
Understanding of Agile development processes and workflows.

Why Join Us?
Flexible Work Environment: Remote working option initially, with transition to a collaborative on-site office.
Career Development: Opportunities for upskilling and cross-functional exposure.
Innovative Culture: Be part of a forward-thinking team that embraces technology and innovation.

Application Questions: What is your current CTC? What is your expected CTC?
Education: Bachelor's Degree (Preferred)
Experience Required: Angular: 2 years (Required); JavaScript: 1 year (Required); API Integration: 1 year (Preferred); UI/UX Design: 3 years (Preferred); Micro Frontend (Angular): Experience/Understanding (Required); Azure Cloud Deployment (Usage/Basic Knowledge): Preferred; CI/CD Pipelines: Working Knowledge (Required)
Location Preference: Noida, Uttar Pradesh (Preferred)
Work Location: In person (after remote transition phase)
Job Type: Full-time
Pay: ₹300,000.00 - ₹500,000.00 per year
Schedule: Day shift
Posted 5 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Experience: 6+ Years. Immediate/serving-notice profiles preferred. Face-to-face interview process.

Responsibilities
Leads multidimensional projects that involve multiple teams.
Leads and works with other software engineers on design best practices and conducts code reviews.
Resolves complex engineering problems, collaborating with others.
Facilitates cross-functional troubleshooting and root cause analysis, and engages others when needed.
Responsible for creating, evaluating, and contributing to detailed feature designs.
Design, develop, and implement software utilizing an agile project cycle.
Mentor team members and raise the bar for technical knowledge across a wide spectrum.
Demonstrates thorough knowledge of information technology concepts, issues, trends, and best practices related to cloud technologies and system integrations.
Apply and share knowledge of secure coding practices and secure system fundamentals.

Skills
Strong proficiency in Java (v11+) with deep expertise in object-oriented programming, concurrency, and performance optimization.
Hands-on experience with Spring Boot and the Spring ecosystem, including Spring MVC, Spring Data, and Spring Security.
Proficiency in containerization technologies such as Docker and orchestration tools like Kubernetes.
Experience with RESTful architecture and microservices development, including API design, security, and scalability-related best practices.
Strong knowledge of relational databases (PostgreSQL, MySQL, Oracle, etc.) and proficiency in writing efficient SQL queries and stored procedures.
Experience working with cloud-based services such as AWS, GCP, or Azure, including serverless computing and managed database services.
Familiarity with CI/CD methodologies and tools such as Jenkins, GitHub Actions, or GitLab CI/CD to automate build, test, and deployment pipelines.
Experience with messaging and event-driven architectures; knowledge of Kafka or RabbitMQ is a plus.
Experience integrating with financial systems (e.g., Anaplan, Oracle Financials) is a plus.
Strong problem-solving skills with a focus on writing clean, maintainable, and well-tested code.
Excellent communication skills (verbal and written) and the ability to collaborate effectively with cross-functional teams.
5+ years of professional experience in backend development, preferably in enterprise or high-scale environments.
Bachelor's or Master's in Computer Science, Engineering, or equivalent practical experience.
Posted 5 days ago
10.0 years
0 Lacs
India
Remote
Job Title: GenAI Architect
Experience: 8–10 Years
Location: Remote
Job Type: Contract

Job Summary:
We are looking for a highly experienced GenAI Architect to lead the design and implementation of Generative AI solutions. The ideal candidate will have a strong background in AI/ML, hands-on experience with LLMs, and the ability to architect scalable and secure AI platforms.

Key Responsibilities:
Design end-to-end architectures for Generative AI systems
Lead model fine-tuning, prompt engineering, and deployment strategies
Collaborate with data scientists, engineers, and business teams to define AI solutions
Implement and optimize LLM pipelines using tools like LangChain, Transformers, and vector databases
Ensure security, scalability, and performance of AI solutions
Stay up to date with the latest in GenAI, LLMs, and AI infrastructure

Must-Have Skills:
Strong experience with LLMs (e.g., GPT, LLaMA, Claude)
Python, Transformers, LangChain
Experience with vector databases (e.g., FAISS, Pinecone, ChromaDB)
Cloud platforms (Azure, AWS, or GCP)
Knowledge of CI/CD, MLOps, and containerization (Docker/Kubernetes)

Nice to Have:
Experience in building AI chatbots and autonomous agents
Exposure to frameworks like LangGraph or AutoGPT
Prior experience in enterprise AI product development
Posted 5 days ago
0 years
0 Lacs
India
On-site
Job Title: Sr. Azure Cloud Engineer
Location: India

We are seeking an experienced Azure Cloud Engineer who specializes in migrating and modernizing applications to the cloud. The ideal candidate will have deep expertise in Azure Cloud, Terraform (Enterprise), containers (Docker), Kubernetes (AKS), CI/CD with GitHub Actions, and Python scripting. Strong soft skills are essential to communicate effectively with technical and non-technical stakeholders during migration and modernization projects.

Key Responsibilities:
Lead and execute the migration and modernization of applications to Azure Cloud using containerization and re-platforming
Re-platform, optimize, and manage containerized applications using Docker and orchestrate them through Azure Kubernetes Service (AKS)
Implement and maintain robust CI/CD pipelines using GitHub Actions to facilitate seamless application migration and deployment
Automate infrastructure and application deployments to ensure consistent, reliable, and scalable cloud environments
Write Python scripts to support migration automation, integration tasks, and tooling
Collaborate closely with cross-functional teams to ensure successful application migration, modernization, and adoption of cloud solutions
Define and implement best practices for DevOps, security, migration strategies, and the software development lifecycle (SDLC)
Deploy infrastructure via Terraform (IAM, networking, security, etc.)

Non-Functional Responsibilities:
Configure and manage comprehensive logging, monitoring, and observability solutions
Develop, test, and maintain Disaster Recovery (DR) plans and backup solutions to ensure cloud resilience
Ensure adherence to all applicable non-functional requirements, including performance, scalability, reliability, and security during migrations

Required Skills and Experience:
Expert-level proficiency in migrating and modernizing applications to Microsoft Azure Cloud services
Strong expertise in Terraform (Enterprise) for infrastructure automation
Proven experience with containerization technologies (Docker) and orchestration platforms (AKS)
Extensive hands-on experience with GitHub Actions and building CI/CD pipelines specifically for cloud migration and modernization efforts
Proficient scripting skills in Python for automation and tooling
Comprehensive understanding of DevOps methodologies and the software development lifecycle (SDLC)
Excellent communication, interpersonal, and collaboration skills
Demonstrable experience in implementing logging, monitoring, backups, and disaster recovery solutions within cloud environments
Posted 5 days ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Responsibilities
• Involved in the complete product lifecycle from initial requirements definition, design, development, and solution configuration through to deployment.
• Agile approach using Behaviour Driven Development and Continuous Deployment technologies
• Support ongoing maintenance and fixes of SmartStream's solutions and in-house toolkits.
• Follow SmartStream's development process and quality standards.
• Responsibility for developing scalable and robust solutions which meet the high performance and availability standards of global financial institutions.

Key Skills
• Professional experience with Java 8/11 and Angular
• Git, Maven, NPM, Jenkins Pipelines
• Spring Framework and Spring Boot microservices
• REST, Swagger/OpenAPI
• Practical experience of SQL in relational databases like Oracle/SQL Server, and of application server middleware
• Good communication skills

Desirable Skills
• Experience in containerization, viz. Docker, Kubernetes

Qualifications
• Engineering Graduate with Computer Science / Information Technology background or similar

Experience
• 3-5 years of software engineering experience
• Experience developing in a software vendor environment is desirable but not required
• Financial software experience would be a bonus, but is not expected
• Experience as a junior software engineer or junior developer is a must
Posted 5 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Java + NodeJS Developer
Location: PAN India
Experience Preferred: 4+ Years

Job Description:
Design, develop, and maintain scalable backend services and RESTful APIs using Java and Node.js.
Write clean, modular, and reusable code following best practices.
Work with microservices architecture and cloud platforms (AWS/Azure/GCP).
Integrate with frontend frameworks and third-party APIs/services.
Optimize applications for speed, scalability, and reliability.
Collaborate with DevOps on CI/CD pipelines, deployment, and monitoring.
Participate in code reviews and technical discussions.
Troubleshoot, debug, and resolve production issues.
Ensure security and data protection in application design.
Document technical specifications and processes.

Required Skills and Qualifications:
Strong programming skills in Java (Spring Boot) and Node.js (Express/NestJS).
Proficiency in relational and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
Experience with RESTful APIs and API documentation tools (Swagger/OpenAPI).
Familiarity with version control systems (Git).
Understanding of asynchronous programming and event-driven architecture.
Good knowledge of unit testing and integration testing (e.g., JUnit, Mocha, Jest).
Experience with message brokers (e.g., Kafka, RabbitMQ) is a plus.
Exposure to containerization (Docker, Kubernetes) is an advantage.
Strong analytical, problem-solving, and communication skills.
Posted 5 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role Summary
Pfizer's purpose is to deliver breakthroughs that change patients' lives. Research and Development is at the heart of fulfilling Pfizer's purpose as we work to translate advanced science and technologies into the therapies and vaccines that matter most. Whether you are in the discovery sciences, ensuring drug safety and efficacy, or supporting clinical trials, you will apply cutting-edge design and process development capabilities to accelerate and bring best-in-class medicines to patients around the world.

Pfizer is seeking a highly skilled and motivated AI Engineer to join our advanced technology team. The successful candidate will be responsible for developing, implementing, and optimizing artificial intelligence models and algorithms to drive innovation and efficiency in our Data Analytics and Supply Chain solutions. This role demands a collaborative mindset, a passion for cutting-edge technology, and a commitment to improving patient outcomes.

Role Responsibilities
Lead data modeling and engineering efforts within advanced data platforms teams to achieve digital outcomes. Provide guidance and lead or co-lead moderately complex projects.
Oversee the development and execution of test plans, creation of test scripts, and thorough data validation processes.
Lead the architecture, design, and implementation of Cloud Data Lake, Data Warehouse, Data Marts, and Data APIs.
Lead the development of complex data products that benefit PGS and ensure reusability across the enterprise.
Collaborate effectively with contractors to deliver technical enhancements.
Oversee the development of automated systems for building, testing, monitoring, and deploying ETL data pipelines within a continuous integration environment.
Collaborate with backend engineering teams to analyze data, enhancing its quality and consistency.
Conduct root cause analysis and address production data issues.
Lead the design, development, and implementation of AI models and algorithms to support sophisticated data analytics and supply chain initiatives.
Stay abreast of the latest advancements in AI and machine learning technologies and apply them to Pfizer's projects.
Provide technical expertise and guidance to team members and stakeholders on AI-related initiatives.
Document and present findings, methodologies, and project outcomes to various stakeholders.
Integrate and collaborate with different technical teams across Digital to drive overall implementation and delivery.
Ability to work with large and complex datasets, including data cleaning, preprocessing, and feature selection.

Basic Qualifications
A bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related discipline.
Over 4 years of experience as a Data Engineer, Data Architect, or in Data Warehousing, Data Modeling, and Data Transformations.
Over 2 years of experience in AI, machine learning, and large language model (LLM) development and deployment.
Proven track record of successfully implementing AI solutions in a healthcare or pharmaceutical setting is preferred.
Strong understanding of data structures, algorithms, and software design principles.
Programming Languages: Proficiency in Python and SQL, and familiarity with Java or Scala.
AI and Automation: Knowledge of AI-driven tools for data pipeline automation, such as Apache Airflow or Prefect.
Ability to use GenAI or agents to augment data engineering practices.

Preferred Qualifications
Data Warehousing: Experience with data warehousing solutions such as Amazon Redshift, Google BigQuery, or Snowflake.
ETL Tools: Knowledge of ETL tools like Apache NiFi, Talend, or Informatica.
Big Data Technologies: Familiarity with Hadoop, Spark, and Kafka for big data processing.
Cloud Platforms: Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
Containerization: Understanding of Docker and Kubernetes for containerization and orchestration.
Data Integration: Skills in integrating data from various sources, including APIs, databases, and external files.
Data Modeling: Understanding of data modeling and database design principles, including graph technologies like Neo4j or Amazon Neptune.
Structured Data: Proficiency in handling structured data from relational databases, data warehouses, and spreadsheets.
Unstructured Data: Experience with unstructured data sources such as text, images, and log files, and tools like Apache Solr or Elasticsearch.
Data Excellence: Familiarity with data excellence concepts, including data governance, data quality management, and data stewardship.

Non-standard Work Schedule, Travel or Environment Requirements
Occasional travel required.

Work Location Assignment: Hybrid

The annual base salary for this position ranges from $96,300.00 to $160,500.00. In addition, this position is eligible for participation in Pfizer's Global Performance Plan with a bonus target of 12.5% of the base salary and eligibility to participate in our share-based long-term incentive program. We offer comprehensive and generous benefits and programs to help our colleagues lead healthy lives and to support each of life's moments. Benefits offered include a 401(k) plan with Pfizer Matching Contributions and an additional Pfizer Retirement Savings Contribution, paid vacation, holiday and personal days, paid caregiver/parental and medical leave, and health benefits to include medical, prescription drug, dental and vision coverage. Learn more at Pfizer Candidate Site – U.S. Benefits | (uscandidates.mypfizerbenefits.com). Pfizer compensation structures and benefit packages are aligned based on the location of hire. The United States salary range provided does not apply to Tampa, FL or any location outside of the United States. Relocation assistance may be available based on business needs and/or eligibility.

Sunshine Act
Pfizer reports payments and other transfers of value to health care providers as required by federal and state transparency laws and implementing regulations. These laws and regulations require Pfizer to provide government agencies with information such as a health care provider's name, address and the type of payments or other value received, generally for public disclosure. Subject to further legal review and statutory or regulatory clarification, which Pfizer intends to pursue, reimbursement of recruiting expenses for licensed physicians may constitute a reportable transfer of value under the federal transparency law commonly known as the Sunshine Act. Therefore, if you are a licensed physician who incurs recruiting expenses as a result of interviewing with Pfizer that we pay or reimburse, your name, address and the amount of payments made currently will be reported to the government. If you have questions regarding this matter, please do not hesitate to contact your Talent Acquisition representative.
EEO & Employment Eligibility
Pfizer is committed to equal opportunity in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, disability or veteran status. Pfizer also complies with all applicable national, state and local laws governing nondiscrimination in employment as well as work authorization and employment eligibility verification requirements of the Immigration and Nationality Act and IRCA. Pfizer is an E-Verify employer. This position requires permanent work authorization in the United States.

Information & Business Tech
Posted 5 days ago
4.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
We are seeking a highly skilled Product Data Engineer with expertise in building, maintaining, and optimizing data pipelines using Python scripting. The ideal candidate will have experience working in a Linux environment, managing large-scale data ingestion, processing files in S3, and balancing disk space and warehouse storage efficiently. This role will be responsible for ensuring seamless data movement across systems while maintaining performance, scalability, and reliability.

Key Responsibilities:
ETL Pipeline Development: Design, develop, and maintain efficient ETL workflows using Python to extract, transform, and load data into structured data warehouses.
Data Pipeline Optimization: Monitor and optimize data pipeline performance, ensuring scalability and reliability in handling large data volumes.
Linux Server Management: Work in a Linux-based environment, executing command-line operations, managing processes, and troubleshooting system performance issues.
File Handling & Storage Management: Efficiently manage data files in Amazon S3, ensuring proper storage organization, retrieval, and archiving of data.
Disk Space & Warehouse Balancing: Proactively monitor and manage disk space usage, preventing storage bottlenecks and ensuring warehouse efficiency.
Error Handling & Logging: Implement robust error-handling mechanisms and logging systems to monitor data pipeline health.
Automation & Scheduling: Automate ETL processes using cron jobs, Airflow, or other workflow orchestration tools.
Data Quality & Validation: Ensure data integrity and consistency by implementing validation checks and reconciliation processes.
Security & Compliance: Follow best practices in data security, access control, and compliance while handling sensitive data.
Collaboration with Teams: Work closely with data engineers, analysts, and product teams to align data processing with business needs.

Skills Required:
Proficiency in Python: Strong hands-on experience in writing Python scripts for ETL processes.
Linux Expertise: Experience working with Linux servers, command-line operations, and system performance tuning.
Cloud Storage Management: Hands-on experience with Amazon S3, including handling file storage, retrieval, and lifecycle policies.
Data Pipeline Management: Experience with ETL frameworks, data pipeline automation, and workflow scheduling (e.g., Apache Airflow, Luigi, or Prefect).
SQL & Database Handling: Strong SQL skills for data extraction, transformation, and loading into relational databases and data warehouses.
Disk Space & Storage Optimization: Ability to manage disk space efficiently, balancing usage across different systems.
Error Handling & Debugging: Strong problem-solving skills to troubleshoot ETL failures, debug logs, and resolve data inconsistencies.

Nice to Have:
Experience with cloud data warehouses (e.g., Snowflake, Redshift, BigQuery).
Knowledge of message queues (Kafka, RabbitMQ) for data streaming.
Familiarity with containerization tools (Docker, Kubernetes) for deployment.
Exposure to infrastructure automation tools (Terraform, Ansible).

Qualifications:
Bachelor's degree in Computer Science, Data Engineering, or a related field.
4+ years of experience in ETL development, data pipeline management, or backend data engineering.
Strong analytical mindset and ability to handle large-scale data processing efficiently.
Ability to work independently in a fast-paced, product-driven environment.
Posted 5 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Python + microservices
Experience range: 8-10 years
Location: Current location must be Bangalore
NOTE: Candidates interested in the walk-in drive in Bangalore must apply.

Job description:
Preferred Qualifications:
Experience with cloud platforms is a plus.
Familiarity with Python frameworks (Flask, FastAPI, Django).
Understanding of DevOps practices and tools (Terraform, Jenkins).
Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver).

Requirements:
Proven experience as a Python developer, specifically in developing microservices.
Strong understanding of containerization and orchestration (Docker, Kubernetes).
Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services.
Familiarity with RESTful APIs and microservices architecture.
Knowledge of database technologies (SQL and NoSQL) and data modelling.
Proficiency in version control systems (Git).
Experience with CI/CD tools and practices.
Strong problem-solving skills and the ability to work independently and collaboratively.
Excellent communication skills, both verbal and written.
Posted 5 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
Architect and Design: Lead the architecture and design of both backend and frontend applications, ensuring scalability, maintainability, and high performance.
Full Stack Development: Write high-quality, clean, and efficient code for both client-side and server-side applications. Develop end-to-end solutions from user interface to database.
Cloud Technologies: Utilize cloud platforms (AWS, Azure, Google Cloud) for scalable and robust application infrastructure. Design cloud-native applications and microservices to meet business requirements.
System Integration: Integrate various backend systems with frontend components, APIs, third-party services, and databases, ensuring seamless data flow and high availability.
Technical Leadership: Mentor junior and mid-level developers, and provide technical direction and guidance on software architecture, design patterns, and coding best practices.
Code Reviews: Lead and participate in peer code reviews to ensure code quality, consistency, and adherence to best practices.
Optimization & Performance: Monitor and optimize application performance for both frontend and backend. Ensure fast load times, scalability, and high uptime.
Collaboration: Work closely with product managers, UI/UX designers, and other developers to gather requirements, provide feedback, and deliver high-quality software.
Continuous Learning: Stay up to date with the latest industry trends, technologies, and best practices. Advocate for improvements in architecture and development processes.

Technical Skills & Expertise:
Backend: Proven expertise in backend technologies such as Node.js, Java, Python, Ruby, Go, or similar.
Frontend: Strong experience with React.js, Angular, Vue.js, or other modern JavaScript frameworks. Knowledge of HTML5, CSS3, and SASS/LESS.
Databases: Deep knowledge of SQL (PostgreSQL, MySQL, etc.) and NoSQL (MongoDB, Cassandra, etc.) databases.
Cloud Computing: Hands-on experience with major cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with serverless architecture and cloud services (e.g., Lambda, S3, EC2, Kubernetes, Docker).
APIs: Expertise in designing and integrating RESTful APIs and GraphQL. Experience with API security and authentication protocols (OAuth, JWT).
DevOps: Experience with CI/CD pipelines, version control (Git), and automation tools (Jenkins, CircleCI, etc.).
Microservices & Containerization: Understanding of microservices architecture, Docker, and Kubernetes for scalable and maintainable applications.
Testing & Quality Assurance: Familiar with unit testing, integration testing, and test-driven development (TDD); tools like Jest, Mocha, JUnit, Selenium.
Version Control: Advanced Git knowledge, including branching, merging, and handling conflicts.
Agile Methodology: Experience working in an Agile/Scrum environment, contributing to sprint planning and retrospectives.
Posted 5 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: AI/ML Engineer (VFX Workflow & Infrastructure)
Location: Hyderabad, India

Role Overview
The AI/ML Engineer will spearhead the development and integration of artificial intelligence and machine learning solutions into our VFX production pipeline. You'll work closely with pipeline developers, production managers, and creative teams to reimagine workflows, automate repetitive tasks, and push the boundaries of innovation in photorealistic rendering, stylized animation, and other CGI processes. This role combines deep technical expertise with a strong understanding of the demands of a fast-paced VFX/animation studio.

Key Responsibilities
AI Strategy & Roadmap: Develop and maintain a strategic plan for implementing AI/ML across the VFX pipeline—covering data wrangling, rendering, asset management, animation, and post-production. Identify new approaches to innovate current workflows, increasing efficiency and cutting down on turnaround time.
Algorithm & Model Development: Research, design, and implement ML models (e.g., computer vision, generative models, style transfer) that improve artist efficiency and production speed, enhance image quality, or enable new creative possibilities. Optimize models for performance on local GPU/CPU clusters or cloud-based infrastructures.
Pipeline Integration & Automation: Collaborate with pipeline engineers to seamlessly integrate AI agents or tools into existing software stacks (e.g., Maya, Houdini, Nuke), ensuring minimal disruption to artists' workflows. Develop automated solutions for tasks like rotoscoping, clean-up, crowd simulation, environment generation, or facial capture/animation.
Infrastructure & Tooling: Architect and maintain robust data pipelines, ensuring the secure collection and organization of high-quality datasets for training AI models. Evaluate and deploy containerization/MLOps tools (Docker, Kubernetes, MLflow, etc.) for scalable model training, inference, and monitoring.
Performance Optimization: Profile model performance, memory usage, and render times; implement optimizations in frameworks such as TensorFlow, PyTorch, or custom GPU pipelines. Work with DevOps/IT teams to configure and manage dedicated GPU farms or cloud compute resources.
Research & Development: Stay updated with state-of-the-art ML/DL techniques, particularly in generative AI, computer vision, and real-time rendering. Introduce emerging methods (e.g., stable diffusion, large language models, neural rendering) to innovate new production techniques.
Documentation & Reporting: Create clear technical documentation for AI solutions, ensuring maintainability and scalability. Present progress, insights, and ROI to executive leadership, project stakeholders, and cross-functional teams.

Qualifications & Skills
Bachelor's or Master's degree in Computer Science, AI/ML, or a related field. A PhD is a bonus but not mandatory.
5+ years of professional experience in applied machine learning or data science, with at least 2 years in a lead/managerial role.
Previous experience in VFX, animation, gaming, or related entertainment industries is a bonus.
Programming: Expert-level Python (C++ is a plus).
ML Frameworks: Deep understanding of TensorFlow, PyTorch, scikit-learn, or similar libraries.
Computer Vision & Generative Models: Familiarity with CNNs, GANs, autoencoders, stable diffusion, or neural radiance fields.
Pipeline Tools (optional): Experience with integration in VFX software (Maya, Houdini, Nuke) and plugin APIs.
DevOps & MLOps: Comfortable with containerization (Docker), orchestration (Kubernetes), CI/CD, and cloud platforms (AWS, Azure, GCP).
Proven track record of translating production challenges into AI/ML solutions that deliver measurable efficiency gains or cost savings.
Experience with model optimization (quantization, pruning) and GPU/CPU performance tuning.
Collaboration: Excellent communication to bridge technical and creative teams, explaining complex concepts in clear, accessible language.
Leadership: Ability to mentor junior engineers and foster a culture of experimentation and continuous learning.
Agility: Adapts quickly to evolving project needs, production pipelines, and new AI techniques.
A genuine interest in cinema, animation, or gaming—a plus if you have prior knowledge of the Baahubali IP or similar large-scale IPs.
Creativity in applying AI to artistic challenges, from photorealistic digital humans to stylized animated sequences.

What We Offer
Impactful Role: Shape the future of VFX and animation filmmaking and leave a lasting mark on flagship studio projects.
Career Growth: Lead a growing AI team, collaborate with top-tier VFX artists, and gain exposure to cutting-edge tech.
Competitive Compensation: Salary, benefits, and potential for performance-based bonuses.
Innovative Environment: Access to advanced hardware, a robust R&D budget, and the opportunity to experiment with emerging AI trends.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Position Overview
We seek a talented frontend-first Full-Stack Engineer with 6+ years of experience to join our dynamic software consulting team. As a Full-Stack Engineer, you will be responsible for developing and implementing high-quality software solutions for our clients, working on both front-end and back-end aspects of projects.

Primary Skill Sets
Frontend: React.js, TypeScript, TanStack Query, React render flow, Next.js
Backend: Node.js, Express.js, NestJS, Sequelize ORM, Server-Sent Events (SSE), WebSockets, Event Emitter
Stylesheets: MUI, Tailwind

Secondary Skill Sets
Messaging Systems: Apache Kafka, AWS SQS, RabbitMQ
Containerization & Orchestration: Docker, Kubernetes (Bonus)
Databases & Caching: Redis, Elasticsearch, MySQL, PostgreSQL

Bonus Experience
- Proven experience in building Agentic UX to enable intelligent, goal-driven user interactions.
- Hands-on expertise in designing and implementing complex workflow management systems.
- Developed Business Process Management (BPM) platforms and dynamic application builders for configurable enterprise solutions.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Experience: 6 to 7 years
Work Mode: Remote
Preferred Notice Period: Immediate to 10 days (candidate has to join by June 23rd)
Mandatory Skills: Magento, API Mesh & Mirakl
Interview Rounds: 2 technical rounds + HR round

Must-Have Skills
Strong hands-on experience in Adobe Commerce custom module development.
Strong hands-on experience with Mirakl marketplace platform integrations.
Proven experience with Adobe App Builder and API Mesh (Adobe I/O Runtime).
Solid understanding of RESTful and GraphQL APIs, OAuth, and token-based authentication.
Experience in headless commerce implementations and decoupled architecture.
Knowledge of multi-vendor workflows, product catalog synchronization, pricing, order management, and inventory integrations.

Nice-to-Have Skills
Familiarity with CI/CD pipelines, containerization, and monitoring tools.
Experience working with third-party payment, logistics, and tax services.
Previous work in mobile-first commerce ecosystems.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Role: Magento Developer
Experience: 6 to 7 years
Location: Remote
Notice Period: Immediate to 10 days (candidate has to join by June 23rd)
Salary: 1,50,000 to 1,60,000 per month
Interview Rounds: 2 technical rounds + HR round (3 rounds including screening)
Mandatory Skills: Magento, API Mesh & Mirakl experience.

Must-Have Skills:
Strong hands-on experience in Adobe Commerce custom module development.
Strong hands-on experience with Mirakl marketplace platform integrations.
Proven experience with Adobe App Builder and API Mesh (Adobe I/O Runtime).
Solid understanding of RESTful and GraphQL APIs, OAuth, and token-based authentication.
Experience in headless commerce implementations and decoupled architecture.
Knowledge of multi-vendor workflows, product catalog synchronization, pricing, order management, and inventory integrations.

Nice-to-Have Skills:
Familiarity with CI/CD pipelines, containerization, and monitoring tools.
Experience working with third-party payment, logistics, and tax services.
Previous work in mobile-first commerce ecosystems.
Posted 5 days ago
1.0 years
0 Lacs
India
Remote
About the job

🚀 AI Fullstack Engineer (Mid-Level)
Location: Remote (Brasilia Time UTC-3 or Dubai Time GMT+3 preferred) | Team: Tech team for Product Engineering
Seniority: 1 – 3 yrs production experience | Stack: Python / TypeScript / React + LLMs

👋 About Us - Reimagining Venture Building with AI at the Core
We're Mundos, the world's first AI-native Venture Builder architecting the next generation of intelligent, high-impact businesses. Unlike traditional incubators or studios, we embed advanced AI capabilities from day zero, transforming how ventures are conceived, built, and scaled globally. We operate at the convergence of visionary strategy and technical execution: identifying opportunities not visible to the naked eye, then rapidly materializing them through our proprietary AI venture building methodology and fast-paced engineering muscle. Working alongside forward-thinking partners across MENA and LATAM, we're not just implementing AI; we're fundamentally rethinking business models around AI's capabilities. While others talk about AI transformation, we're already shipping it: moving with startup velocity while maintaining institutional-grade discipline, quality, and seamless user experiences. Our globally distributed team unites serial entrepreneurs, AI researchers, and seasoned operators who share one trait: the ability to translate cutting-edge AI capabilities into tangible business impact. We're seeking a versatile software engineer who thrives in high-velocity environments, ships production-ready code across the full stack, and is eager to help architect the future of AI-powered applications and grow into an AI-powered engineering team.

👩💻 What You'll Build
Architect & Build: Create robust RESTful/GraphQL APIs that power both internal tools and customer-facing applications in our venture portfolio
AI Integration: Implement and optimize RAG pipelines, vector DB integrations, PostgreSQL, Redis, external APIs, and LLM orchestration layers that deliver intelligence, not just responses
Full-Stack Mastery: Own feature development from back-end logic to polished React UIs (TypeScript/JavaScript), balancing technical elegance with business velocity
Team Collaboration: Work directly with our founding AI engineer and senior engineering leadership while mentoring junior talent—we grow together and deliver fast iterations
Agile Execution: Drive from sprint planning to deployment, with ownership across the entire development lifecycle.
Write clean LLDs and participate in sprint planning, code reviews, and deployment automation
Infrastructure Evolution: Deploy and manage services using Docker and cloud infrastructure (AWS/GCP)

🧠 Your Toolkit
Production Impact: 1-3 years building software that real users depend on (not just internships or side projects)
Technical Foundation: Solid understanding of API design principles, database architecture and schema design, and error handling
Data Expertise: Experience with PostgreSQL/MySQL and performance optimization with key-value stores like Redis
Modern Architecture: Hands-on with event-driven systems, message queues (Kafka/RabbitMQ), or serverless functions
AI Fluency: Working knowledge of LLM integration using both closed and open-source models—you understand prompts and parameters, not just APIs
Frontend Proficiency: Comfort with React hooks, state management solutions (Redux/Zustand), and component libraries that deliver pixel-perfect experiences
Cloud-Native Thinking: Familiarity with containerization, CI/CD pipelines, and infrastructure-as-code approaches (GCP, AWS, or Azure)
Ownership Mindset: You don't just build it—you own it, monitor it, and continuously improve and iterate on it

🌟 Bonus Points
AI Engineering Experience: Built or contributed to RAG pipelines, AI agents, LangGraph implementations, or LlamaIndex applications
AI-Adjacent Projects: Developed chatbots, NLP tools, data pipelines, recommendation systems, or other ML-enhanced applications
Venture Building Spirit: Experience in fast-paced environments where you wear multiple hats and contribute beyond your job description

🌱 Why You'll Enjoy This Role
Engineering Excellence: We prioritize robust, maintainable code over glossy demos; real engineering for real business impact
True Ownership: You won't just be implementing specs; you'll help shape our technical direction and architecture
Remote-First Culture: Work where and when you're most productive, with async-first communication and results-oriented leadership
Velocity Without Chaos: We move quickly but deliberately, with proper planning and a sustainable pace

💡 Why Join Mundos
Venture Building DNA: Your code doesn't just ship features; it builds entire businesses that can scale independently
Small Team, Huge Canvas: Your code lands in production within days, not quarters
Global Impact: Work on ventures that span multiple markets, cultures, and business models
Exponential Learning: Exposure to multiple ventures means accelerated growth across domains and technologies
Founder-Level Opportunities: Early team members grow into leadership roles as our ventures mature
Competitive Compensation: USD salary, equity (ESOPs) in our venture ecosystem, flexible remote work, and a clearly defined growth trajectory
Note: This is a contract-based opportunity that can be extended to a full-time hire.

📬 How to Apply
Send your resume (required), a thoughtful cover letter (required), and GitHub profile (required) to anish.yog10@gmail.com. Tell us in two sentences about a feature you shipped that made users smile (required). Incomplete applications will not be considered. We value attention to detail as much as technical skill.

Join us in building the next generation of AI-native ventures—where technical excellence meets entrepreneurial vision to solve meaningful problems at global scale.
Posted 5 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ciklum is looking for a .NET Developer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a .NET Developer, become a part of a cross-functional development team engineering experiences of tomorrow. The client for this project is a leading global provider of audit and assurance, consulting, financial advisory, risk advisory, tax, and related services. They are launching a digital transformation project to evaluate existing technology across the tax lifecycle and determine the best future state for that technology. This will include decomposing existing assets to determine functionality, assessment of those functionalities to determine the appropriate end state, and building of new technologies to replace those functionalities.

Responsibilities:
Back-end development of new functionality
Participating in coding and code reviews, documenting the code and, where possible, focusing on architecture
Communicating with stakeholders: PMs, developers, architects, QA engineers and other colleagues
Taking a proactive position in solution development and process improvements
Delivering the product roadmap and planning for the future
Handling complex problems that might arise during solution development and providing field support with creative and rapid solutions
Ensuring that the highest coding standards are met and writing highly testable, automatable and performant code over the whole SDLC

Requirements:
More than 4 years of experience in commercial software development
Excellent knowledge of computer science and computing theory: OOP, DDD, SOLID, TDD, BDD
Database theory (RDBMS, NoSQL)
Algorithms and data structures
Design, and knowledge of architectural and enterprise patterns
Understanding of network protocols and conventions (e.g. HTTP, REST), authentication and authorization flows and practices
Experience with NoSQL (e.g. MongoDB, DynamoDB)
Knowledge of key-value storages (e.g. Redis, Memcached)
Basic knowledge of containerization and orchestration (Docker, Kubernetes)
Excellent knowledge and experience with C# and .NET
Commercial experience with: .NET Framework, .NET Core, ASP.NET (Core, MVC, WebAPI); ORM (e.g. Entity Framework, Dapper); RDBMS (especially SQL Server); messaging systems (e.g. RabbitMQ, ServiceBus); cloud providers (e.g. Azure); testing frameworks (e.g. NUnit, XUnit, MSTest); web servers; version control systems (e.g. GIT)
Upper-intermediate English or above

Desirable:
Experience with search engines (e.g. ElasticSearch, Azure Search)
Experience with REST API development for mobile applications
Experience with integration with 3rd party solutions

Personal skills:
Ability to relate positively to and engage with a wide range of people
Strong self-motivation; reliable and flexible team player
High attention to detail
Always seeking to improve processes and suggesting better alternative solutions
Readiness to embrace change and be flexible
Ability and willingness to mentor more junior team members

What's in it for you?
Care: your mental and physical health is our priority.
We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation
Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications
Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally
Flexibility: hybrid work mode at Chennai or Pune
Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential
Global impact: work on large-scale projects that redefine industries with international and fast-growing clients
Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events

About us:
India is a strategic growth market for Ciklum. Be a part of a big story created right now. Let's grow our delivery center in India together! Boost your skills and knowledge: create and innovate with like-minded professionals — all of that within a global company with a local spirit and start-up soul. Supported by Recognize Partners and expanding globally, we will engineer the experiences of tomorrow! Be bold, not bored!

Interested already? We would love to get to know you! Submit your application. We can't wait to see you at Ciklum.
Posted 5 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a highly skilled and experienced Senior Backend Developer to join our team. The ideal candidate will have a strong background in Python, FastAPI, MongoDB, and AWS, and proven experience building scalable backend solutions capable of handling thousands of concurrent users. You will play a key role in the design, development, and maintenance of core backend systems, and lead workstreams involving junior developers.
Key Responsibilities:
Design, develop, and maintain scalable backend services using Python and FastAPI.
Architect and implement solutions that ensure high availability, performance, and security.
Work with MongoDB for schema design, optimization, and querying large-scale data efficiently.
Leverage AWS services to build and deploy robust cloud-native applications.
Own and drive technical workstreams, ensuring timely delivery and quality.
Collaborate with frontend, product, and QA teams to define and refine system requirements.
Mentor junior developers, conduct code reviews, and promote best practices in software engineering.
Monitor system performance, debug issues, and continuously optimize backend infrastructure.
Required Skills & Qualifications:
3+ years of backend development experience with Python, with strong command of FastAPI.
Proficient in MongoDB, with hands-on experience in designing and scaling NoSQL databases.
Solid experience working with AWS cloud services (e.g., EC2, S3, Lambda, API Gateway, etc.).
Demonstrated ability to design and support scalable solutions handling thousands of concurrent users.
Strong understanding of software architecture, RESTful API design, and microservices.
Excellent communication and collaboration skills.
Experience leading projects or small teams of developers; ability to manage tasks and timelines effectively.
Preferred Qualifications:
Experience with containerization tools like Docker and orchestration with Kubernetes.
Familiarity with CI/CD pipelines and tools like GitHub Actions, Jenkins, or CircleCI.
Exposure to observability tools (e.g., Prometheus, Grafana, CloudWatch) and logging frameworks.
Knowledge of security best practices in API and cloud infrastructure development.
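The posting above centers on async Python services backed by MongoDB; a minimal, hedged sketch of that pattern follows. The database name, collection, and connection URI are hypothetical placeholders, and the Motor driver plus Pydantic v2 are assumed to be available; this is illustrative, not the employer's actual codebase.

```python
# Minimal async FastAPI + MongoDB sketch (illustrative only; names and URI are hypothetical).
from fastapi import FastAPI, HTTPException
from motor.motor_asyncio import AsyncIOMotorClient  # assumes the Motor async driver is installed
from pydantic import BaseModel

app = FastAPI()
client = AsyncIOMotorClient("mongodb://localhost:27017")  # hypothetical connection string
db = client["jobs_demo"]  # hypothetical database name


class Job(BaseModel):
    title: str
    location: str


@app.post("/jobs")
async def create_job(job: Job) -> dict:
    # Insert one document; Motor returns the generated ObjectId.
    result = await db.jobs.insert_one(job.model_dump())  # Pydantic v2 API (.dict() on v1)
    return {"id": str(result.inserted_id)}


@app.get("/jobs/{title}")
async def get_job(title: str) -> Job:
    doc = await db.jobs.find_one({"title": title})
    if doc is None:
        raise HTTPException(status_code=404, detail="Job not found")
    return Job(title=doc["title"], location=doc["location"])
```

Because both FastAPI and Motor are non-blocking, a single worker process can multiplex many concurrent requests, which is the scalability property the role emphasizes.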
Posted 5 days ago
12.0 - 16.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
GEN AI Architect
Job Summary: We are seeking a highly skilled and experienced GEN AI Architect to lead the design, development, and deployment of generative AI solutions. The ideal candidate will have a strong background in artificial intelligence, machine learning, and deep learning, with a proven track record in building and implementing AI models and systems. This role requires a strategic thinker who can align AI initiatives with business goals and drive innovation across the organization.
Key Responsibilities:
Architecture Design: Design and develop scalable, robust, and secure AI architectures for various applications, ensuring alignment with business objectives.
Model Development: Lead the development and optimization of generative AI models, including but not limited to natural language processing (NLP), computer vision, and generative adversarial networks (GANs).
Technology Strategy: Develop and implement AI strategies that leverage the latest advancements in generative AI technologies to drive business value.
Project Management: Oversee AI projects from conception to deployment, ensuring timely delivery and integration with existing systems.
Collaboration: Work closely with data scientists, engineers, and business stakeholders to identify AI opportunities and translate them into technical solutions.
Research and Innovation: Stay up to date with the latest research and trends in generative AI, and apply new findings to enhance existing models and develop new applications.
Mentorship: Provide guidance and mentorship to junior AI engineers and data scientists, fostering a culture of learning and innovation within the team.
Performance Monitoring: Establish metrics and processes to monitor the performance and effectiveness of AI models, ensuring continuous improvement and optimization.
Compliance and Ethics: Ensure AI solutions comply with relevant regulations and ethical standards, maintaining transparency and accountability in AI decision-making.
Qualifications:
Education: Master's or PhD in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
Experience: Minimum of 12-16 years of experience in AI and machine learning, with at least 3 years in a leadership or architect role.
Technical Skills:
Proficiency in programming languages such as Python and Java
Extensive experience with machine learning frameworks (LangChain, TensorFlow, PyTorch, Keras) and AI tools (OpenAI GPT, DALL-E, etc.)
Strong understanding of deep learning techniques, including CNNs, RNNs, and GANs
Experience with cloud platforms (AWS, Azure, Google Cloud) and containerization technologies (Docker, Kubernetes)
Analytical Skills: Strong problem-solving skills with the ability to analyze complex data sets and extract meaningful insights.
Communication Skills: Excellent verbal and written communication skills, with the ability to convey technical concepts to non-technical stakeholders.
Leadership Skills: Proven ability to lead and inspire a team, driving innovation and achieving business objectives.
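Roles like this expect hands-on familiarity with the model-serving side of generative AI, not only strategy. Below is a deliberately small sketch of text generation using the Hugging Face transformers pipeline; it assumes transformers and a backend such as PyTorch are installed, and the gpt2 checkpoint is chosen only because it is small enough for a quick demo, not because the posting mentions it.

```python
# Minimal text-generation sketch using the Hugging Face `transformers` pipeline.
# Assumes `transformers` plus a backend (e.g. PyTorch) are installed; gpt2 is an
# arbitrary small checkpoint used purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A generative AI architect is responsible for"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# Each output is a dict whose "generated_text" key holds the prompt plus continuation.
print(outputs[0]["generated_text"])
```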
Posted 5 days ago
2.0 years
0 Lacs
Panchkula, Haryana
On-site
Job Title: Java Developer
Experience: Minimum 2 years
Location: On-site
Employment Type: Full-time
Job Overview: We are seeking a skilled Java Developer to join our team in building scalable and high-performance applications for the fleet management industry. The ideal candidate should have at least 2 years of experience in Java development, with expertise in Spring Boot, Microservices, Kafka Streams, and AWS.
Key Responsibilities:
Develop, deploy, and maintain microservices using Spring Boot.
Design and implement Kafka Streams for real-time data processing.
Optimize and manage PostgreSQL databases.
Work with AWS services for cloud-based deployments and scalability.
Collaborate with cross-functional teams to design, develop, and test features.
Ensure system performance, reliability, and security best practices.
Troubleshoot and resolve technical issues efficiently.
Required Skills & Qualifications:
2+ years of experience in Java development.
Strong knowledge of Spring Boot and Microservices architecture.
Experience with Kafka Streams and real-time data processing.
Proficiency in PostgreSQL, including writing optimized queries.
Hands-on experience with AWS services (EC2, S3, Lambda, etc.).
Familiarity with CI/CD pipelines and containerization (Docker/Kubernetes).
Strong problem-solving and debugging skills.
Good understanding of RESTful APIs and event-driven architectures.
Job Type: Full-time
Pay: ₹30,000.00 - ₹80,000.00 per month
Benefits: Cell phone reimbursement, Paid sick time
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Experience: 5G: 2 years (Preferred)
Location: Panchkula, Haryana (Preferred)
Work Location: In person
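The real-time processing this role describes is built on Kafka consumption loops. The sketch below illustrates that consume-and-process pattern in Python with the confluent-kafka client, purely to keep one language across the examples in this page; the broker, group, and topic names are hypothetical, and the posting itself targets the Spring Boot / Kafka Streams API rather than this client.

```python
# Illustrative Kafka consume-process loop (confluent-kafka client assumed installed).
# Broker address, consumer group, and topic name are hypothetical placeholders.
from confluent_kafka import Consumer

conf = {
    "bootstrap.servers": "localhost:9092",   # hypothetical broker
    "group.id": "fleet-demo",                # hypothetical consumer group
    "auto.offset.reset": "earliest",
}
consumer = Consumer(conf)
consumer.subscribe(["vehicle-telemetry"])     # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        # Decode and "process" the record; real code would update state or a database here.
        print(f"key={msg.key()}, value={msg.value().decode('utf-8')}")
finally:
    consumer.close()
```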
Posted 5 days ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Expectations:
Design, develop, and execute automated tests to ensure product quality in digital transformation initiatives.
Collaborate with developers and business stakeholders to understand project requirements and define test strategies.
Implement API testing using Mockito, WireMock, and stubs for effective validation of integrations.
Utilize Kafka and MQ to test and monitor real-time data streaming scenarios.
Perform automation testing using RestAssured, Selenium, and TestNG to ensure smooth delivery of applications.
Leverage Splunk and AppDynamics for real-time monitoring, identifying bottlenecks, and diagnosing application issues.
Create and maintain continuous integration/continuous deployment (CI/CD) pipelines using Gradle and Docker.
Conduct performance testing using tools like Gatling and JMeter to evaluate application performance and scalability.
Participate in Test Management and Defect Management processes to track progress and issues effectively.
Work closely with onshore teams and provide insights to enhance test coverage and overall quality.
Qualifications:
4-7 years of relevant experience in QA automation and Java.
Programming: Strong experience with Java 8 and above, including a deep understanding of the Streams API.
Frameworks: Proficiency in Spring Boot and JUnit for developing and testing robust applications.
API Testing: Advanced knowledge of RestAssured and Selenium for API and UI automation. Candidates must demonstrate hands-on expertise.
CI/CD Tools: Solid understanding of Jenkins for continuous integration and deployment.
Cloud Platforms: Working knowledge of AWS for cloud testing and deployment.
Monitoring Tools: Familiarity with Splunk and AppDynamics for performance monitoring and troubleshooting.
Defect Management: Practical experience with test management tools and defect tracking.
Build & Deployment: Experience with Gradle for build automation and Docker for application containerization.
SQL: Strong proficiency in SQL, including query writing and database operations for validating test results.
Domain Knowledge: Prior experience in the Payments domain with a good understanding of domain-specific workflows.
Nice to Have:
Data Streaming Tools: Experience with Kafka (including basic queries and architecture) or MQ for data streaming testing.
Financial services or payments domain experience will be preferred.
Frameworks: Experience with Apache Camel for message-based application integration.
Performance Testing: Experience with Gatling and JMeter for conducting load and performance testing.
Posted 5 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Cybage: Founded in 1995, Cybage Software Pvt. Ltd., a technology consulting organization, is a leader in the hi-tech and outsourced product engineering space. We are a valued partner to technology startups, mid-size companies and Fortune 500 corporations alike. Our solutions are focused on modern technologies and are enabled by a scientific, data-driven system called the Excel Shore Model of Operational Excellence.
Python Developer at Cybage
Mandatory Skills:
Experience with Celery, web scrapers and associated libraries, data management libraries, and async calls
PostgreSQL or another TSQL-based database (MySQL, MS SQL)
Should have implemented object-oriented, multi-tier applications
Experience defining and building unit tests
Experience with logging
Familiarity with writing and consuming APIs for integrating data from disparate sources
Good to Have Skills:
Hands-on experience with Docker
Prior experience with FastAPI and Django
Prior experience with Linux and the ability to navigate the file system through SSH
Prior experience with AWS S3
Experience with Redis
Roles & Responsibilities:
Develop and maintain scalable backend systems using Python, with asynchronous processing via Celery and web scraping solutions.
Design, implement, and optimize relational database schemas using PostgreSQL or other TSQL-based databases like MySQL or MS SQL.
Build, consume, and integrate RESTful APIs to enable smooth data flow between various internal and external systems.
Apply object-oriented programming principles to develop reusable, modular, and well-structured code for multi-tier applications.
Write and maintain unit tests to ensure high code quality, reliability, and performance throughout the development lifecycle.
Implement effective logging mechanisms to monitor, debug, and analyze backend processes and services.
Work with containerization tools like Docker for deployment and environment consistency (good to have).
Utilize frameworks such as FastAPI or Django for rapid and secure API development (good to have).
Operate in Linux environments using SSH and interact with AWS S3 and Redis for storage and caching needs (good to have).
Collaborate with cross-functional teams and maintain clear technical documentation for backend services and APIs.
Location and Work Mode:
Project Location: Pune
Work Mode: Hybrid
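The mandatory skills above center on Celery-driven background work (scraping, data processing) feeding a relational store. A minimal, hedged sketch of such a task is shown below; the broker URL, task name, and target URL are hypothetical, requests is used as the HTTP client, and persisting into PostgreSQL is only indicated in a comment.

```python
# Minimal Celery sketch for background scraping work (illustrative only).
# The broker URL and target URL are hypothetical placeholders.
import requests
from celery import Celery

app = Celery("scraper", broker="redis://localhost:6379/0")  # hypothetical Redis broker


@app.task(bind=True, max_retries=3)
def fetch_page(self, url: str) -> int:
    """Download a page and return its size in bytes; retry on transient errors."""
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
    except requests.RequestException as exc:
        # Re-enqueue the task after a short delay instead of failing outright.
        raise self.retry(exc=exc, countdown=5)
    # Real code would parse the content and persist it (e.g. into PostgreSQL) here.
    return len(response.content)


# Enqueue from application code; a worker started with `celery -A scraper worker` executes it:
# fetch_page.delay("https://example.com/listings")
```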
Posted 5 days ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Summary
We are seeking a dynamic and experienced Technical Project Lead with a strong foundation in .NET, Java, or Python. The ideal candidate will have a minimum of 6 years of software development experience. This individual will play a crucial role in leading projects, managing teams, and being the primary contact with clients.
Key Responsibilities
Lead full-cycle project delivery from planning to deployment.
Collaborate closely with cross-functional teams, including developers, designers, and stakeholders, to define project scope, objectives, and deliverables.
Act as the main communication bridge between clients and internal teams.
Understand client requirements and convert them into actionable technical tasks.
Manage development teams and track team performance.
Coordinate Agile ceremonies: sprint planning, reviews, standups, retrospectives.
Proactively identify project risks and blockers, and propose mitigation strategies.
Maintain proper documentation and status updates for internal and external stakeholders.
Collaborate with frontend and backend developers to ensure alignment with architectural principles and design patterns.
Provide technical guidance and architectural support to resolve implementation challenges in UI and backend services.
Ensure the solution is scalable, secure, and performance-optimized across all layers of the tech stack.
Required Skills & Qualifications
Strong understanding of database systems such as SQL Server, PostgreSQL, or MongoDB.
Experience with RESTful APIs and microservices architecture.
Bachelor's or Master's degree in Computer Science, IT, or a related field.
6+ years of experience in backend software development using .NET, Java, or Python.
Proficiency in Agile/Scrum methodologies and hands-on experience with tools like Jira, Azure, Trello, Git, etc.
Excellent verbal and written communication skills for stakeholder interaction.
Exposure to cloud platforms such as AWS, Azure, or GCP.
Understanding of CI/CD pipelines, release management, and DevOps practices.
Familiarity with containerization and orchestration tools like Docker and Kubernetes.
Experience with performance tuning and optimization of applications.
Willingness to participate in hands-on coding when necessary.
Soft Skills
Strong team leadership and decision-making abilities.
Ability to handle pressure and adapt to fast-changing environments.
Client-focused mindset with a problem-solving approach.
Posted 5 days ago
Containerization has become a crucial aspect of modern software development, and the job market for containerization roles in India is thriving. Companies across various industries are increasingly adopting containerization technologies like Docker and Kubernetes, creating a high demand for skilled professionals in this field.
The average salary range for containerization professionals in India varies based on experience levels. Entry-level positions can start at around INR 5-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the containerization domain, a typical career path may involve starting as a Junior Developer, progressing to a Senior Developer, and eventually moving up to a Tech Lead role. Continuous learning and hands-on experience with containerization tools are key to advancing in this field.
Apart from proficiency in containerization technologies, professionals in this field are often expected to have strong skills in networking, cloud computing, automation, and security. Knowledge of scripting languages like Python or Shell scripting can also be beneficial.
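Since the article recommends pairing containerization knowledge with Python scripting, here is a small, hedged sketch of driving Docker from Python with the docker SDK. It assumes the docker package is installed and a local Docker daemon is running; the image and command are placeholders chosen for illustration.

```python
# Small sketch: driving a local Docker daemon from Python via the `docker` SDK.
# Assumes `pip install docker` and a running daemon; image and command are placeholders.
import docker

client = docker.from_env()

# Pull a tiny public image and run a one-off command, capturing its output.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"], remove=True)
print(output.decode("utf-8").strip())

# List any containers currently running on the host.
for container in client.containers.list():
    print(container.name, container.status)
```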
As you explore opportunities in the containerization job market in India, remember to stay updated on the latest trends and technologies in this field. With the right skills and preparation, you can confidently pursue a rewarding career in containerization. Best of luck in your job search!