5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Alight is hiring a Business Technical Analyst / Scrum Master to join our Retiree Health Solutions business unit. As part of an industry-leading team, you will help empower results for our clients by delivering innovative and effective solutions. The Business Technical Analyst / Scrum Master is responsible for working with a team and stakeholders to gather, capture, and groom requirements, develop acceptance criteria, and assist with user testing to ensure a quality product is delivered for one or more product teams.
Key Responsibilities
Facilitate Agile ceremonies, including daily stand-ups, sprint planning, reviews, and retrospectives.
Guide and coach the Scrum team on Agile principles and best practices.
Remove impediments and blockers to ensure smooth project delivery.
Collaborate with the Product Owner(s) to manage and prioritize the product backlog.
Act as liaison between Business Product Owner(s) and Technology Development team(s).
Lead business requirement discussions based on priority set by the Product Owner; manage capture of materials and documentation as needed by the team to support successful delivery.
Support and coordinate User Acceptance Testing.
Foster a culture of continuous improvement by encouraging feedback and process refinement.
Promote team accountability and ownership of deliverables.
Ensure alignment with organizational goals and Agile standards.
Document requirements (user stories), functional and nonfunctional, for existing and new products, to pass on knowledge to the technology delivery team.
Answer questions and work closely with the project team and business team throughout development.
Apply and adapt Agile values and principles with the team to improve workflow, identify lessons learned, evaluate completed tasks, and make process improvements based upon successful and unsuccessful project elements.
Develop best practices to share.
Serve as an escalation point for issues requiring functional engagement.
Required Skills And Qualifications
Bachelor’s degree in Computer Science, Information Technology, Business, or a related field.
Certified Scrum Master (CSM), Professional Scrum Master (PSM), or SAFe Scrum Master certification (preferred).
Proficiency in Agile tools such as Jira and Azure DevOps.
Understanding of the software development lifecycle (SDLC) and DevOps practices.
Strong understanding of business systems and Customer Relationship Management software, particularly Microsoft Dynamics CRM (preferred).
Strong facilitation, conflict resolution, and problem-solving skills.
Excellent communication and stakeholder management abilities.
Ability to drive team collaboration and foster a high-performing Agile culture.
Excellent analytical and critical thinking skills.
Strong facilitation skills in leading planning meetings, reviews, and retrospectives.
Good interpersonal skills and ability to work with diverse and remote teams.
Ability to structure and communicate needs, requirements, and solutions in a business context for business stakeholders.
Experience:
5+ years of experience as a Scrum Master in Agile environments for a product-focused delivery team.
Proven track record of successfully delivering projects using Scrum methodologies.
Experience working with cross-functional and distributed teams.
Experience in Health Insurance Technology Solutions.
Domain expertise in IT, software development, or relevant industry sectors.
Experience in coordinating and managing teams to develop functional designs that satisfy user requirements.
Successful track record of delivering technology projects of very high complexity.
We offer you a competitive total rewards package, continuing education & training, and tremendous potential with a growing worldwide organization.
DISCLAIMER: Nothing in this job description restricts management's right to assign or reassign duties and responsibilities of this job to other entities, including but not limited to subsidiaries, partners, or purchasers of Alight business units.
Posted 22 hours ago
5.0 years
7 - 10 Lacs
Coimbatore, Tamil Nadu, India
On-site
About The Opportunity
A leader in Enterprise Software & Technology Services focused on delivering bespoke SaaS and digital-transformation solutions to enterprises across domains. The organisation builds scalable, secure, and performant applications—blending cloud-native architecture, microservices, and strong engineering practices to drive business outcomes.
Primary Title: Technical Lead
Location: Coimbatore
Role & Responsibilities
Lead design and delivery of backend systems and microservices—own architecture decisions that balance scalability, security, and time-to-market.
Write and review production-quality code; drive best practices in API design, data modelling, and service decomposition.
Define and maintain CI/CD pipelines, automated testing, observability, and release processes to ensure high uptime and fast recovery.
Collaborate with product, QA, and DevOps to translate requirements into technical specifications and realistic delivery plans.
Coach and mentor engineers, run technical reviews, and establish engineering standards (code quality, performance, documentation).
Engage with stakeholders and clients on technical trade-offs, estimations, and delivery risks; coordinate cross-functional teams for successful launches.
Skills & Qualifications
Must-Have
5+ years in software engineering with 3+ years in a lead/tech-lead role or similar responsibility.
Strong backend development experience in Java, Node.js, or Python; solid understanding of RESTful APIs and service-oriented design.
Hands-on experience with microservices, containerization (Docker), and orchestration (Kubernetes).
Practical cloud experience (AWS, Azure, or GCP) and familiarity with cloud-native patterns (load balancing, autoscaling, storage).
Proven ability to implement CI/CD, automated testing, and observability (metrics, tracing, logging) in production systems.
Excellent problem-solving, system design, and stakeholder communication skills; willing to work on-site in India.
Preferred
Experience with event-driven architectures, message brokers (Kafka, RabbitMQ), and caching strategies (Redis).
Knowledge of SQL and NoSQL databases and data modelling for scale (Postgres, MySQL, MongoDB, Cassandra).
Exposure to frontend integration, security best practices, and performance tuning at scale.
Benefits & Culture Highlights
High impact on product architecture, with opportunities to shape engineering practices and mentor teams.
Collaborative, outcome-driven environment that values code quality, continuous improvement, and clear ownership.
On-site role offering close collaboration with product and client stakeholders—ideal for hands-on leaders who enjoy delivery accountability.
We are looking for a pragmatic Technical Lead who combines deep engineering skills with strong people leadership to deliver reliable, scalable software. Apply if you enjoy solving complex systems problems and leading teams to operational excellence in an on-site, fast-paced environment.
Skills: tech lead, project management, artificial intelligence
Posted 22 hours ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Knowledge, Skills, And Abilities
Ability to translate a logical data model into a relational or non-relational solution.
Expert in one or more of the following ETL tools: SSIS, Azure Data Factory, AWS Glue, Matillion, Talend, Informatica, Fivetran.
Hands-on experience in setting up end-to-end cloud-based data lakes.
Hands-on experience in database development using views, SQL scripts, and transformations.
Ability to translate complex business problems into data-driven solutions.
Working knowledge of reporting tools such as Power BI and Tableau.
Ability to identify data quality issues that could affect business outcomes.
Flexibility in working across different database technologies and propensity to learn new platforms on the fly.
Strong interpersonal skills.
Team player prepared to lead or support depending on the situation.
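As a tool-agnostic illustration of the transformation and data-quality skills listed above, a minimal extract-transform-validate step might look like the following Python sketch (field names and validation rules are invented for the example, not drawn from any of the listed tools):

```python
# Minimal extract-transform-validate sketch: the kind of row-level
# cleansing an ETL tool (SSIS, ADF, Glue, ...) automates at scale.
# All field names and rules here are illustrative assumptions.

def transform(row: dict) -> dict:
    """Normalize one raw record into the target schema."""
    return {
        "customer_id": row["id"].strip(),
        "email": row["email"].strip().lower(),
        "revenue": float(row["revenue"] or 0),
    }

def validate(row: dict) -> list[str]:
    """Return a list of data-quality issues for one transformed row."""
    issues = []
    if not row["customer_id"]:
        issues.append("missing customer_id")
    if "@" not in row["email"]:
        issues.append("malformed email")
    if row["revenue"] < 0:
        issues.append("negative revenue")
    return issues

raw = [
    {"id": " C1 ", "email": "A@X.COM", "revenue": "120.5"},
    {"id": "",     "email": "bad",     "revenue": "-3"},
]
clean, rejected = [], []
for r in raw:
    t = transform(r)
    (clean if not validate(t) else rejected).append(t)
```

Real pipelines add the same validate-and-route step at scale, quarantining rejected rows for review rather than silently dropping them.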
Posted 22 hours ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Us
Innovation. Sustainability. Productivity. This is how we are Breaking New Ground in our mission to sustainably advance the noble work of farmers and builders everywhere. With a growing global population and increased demands on resources, our products are instrumental to feeding and sheltering the world. From developing products that run on alternative power to productivity-enhancing precision tech, we are delivering solutions that benefit people – and they are possible thanks to people like you. If the opportunity to build your skills as part of a collaborative, global team excites you, you’re in the right place. Grow a Career. Build a Future!
Be part of this company at the forefront of agriculture and construction, which passionately innovates to drive customer efficiency and success. And we know innovation can’t happen without collaboration. So, everything we do at CNH Industrial is about reaching new heights as one team, always delivering for the good of our customers.
Job Purpose
To lead the design, development, and deployment of advanced AI/ML and Generative AI solutions, driving innovation and business value across the organization. This role serves as the technical and strategic leader responsible for shaping AI initiatives, ensuring scalable architecture, and aligning solutions with business objectives. The AI/ML/Gen AI Lead will also manage cross-functional collaboration, effectively communicating with stakeholders to translate complex technical concepts into actionable insights and drive adoption of AI technologies. Must have experience with both POCs and production-grade solutions.
Key Responsibilities
🔹 Generative AI Leadership
Architect and deploy GenAI solutions such as:
Chatbots and conversational agents
Intelligent document processing
Code generation and copilots
Content summarization, personalization, or generation
Customize and fine-tune foundation models (e.g., GPT, LLaMA, Claude, Mistral) for domain-specific use cases.
Drive evaluation and integration of GenAI frameworks and tooling (e.g., LangChain, Semantic Kernel, LlamaIndex, Transformers).
Implement prompt engineering and retrieval-augmented generation (RAG) pipelines at scale.
🔹 Technical Strategy & Execution
Define and execute the Generative AI roadmap aligned with business goals.
Collaborate with product, engineering, and business stakeholders to identify and prioritize GenAI use cases.
Lead POCs and pilots to validate ideas before full-scale implementation.
Ensure robust, secure, and ethical deployment of GenAI systems, including governance and monitoring.
🔹 Team Leadership & Mentorship
Lead, mentor, and grow a team of AI/ML engineers and researchers.
Establish best practices in model development, experimentation, and deployment.
Foster a culture of continuous innovation and learning in GenAI.
🔹 Platform & Infrastructure (Supporting Azure)
Deploy and operationalize models using cloud platforms, ideally Azure AI services (OpenAI on Azure, Azure ML, Azure Cognitive Search).
Manage GenAI infrastructure (e.g., vector databases, inference endpoints, GPUs) for performance and cost-efficiency.
Utilize MLOps practices for model lifecycle management and reproducibility.
Experience Required
8+ years of experience in AI/ML, with at least 2+ years in GenAI-specific roles.
Proven experience with foundation models (e.g., GPT-4, Claude, LLaMA) and relevant toolsets.
Proficiency in Python and AI/ML libraries (e.g., PyTorch, Transformers, Hugging Face).
Strong understanding of prompt engineering, RAG, LLMOps, and LLM fine-tuning.
Experience with vector databases (e.g., FAISS, Pinecone, Weaviate, Azure AI Search).
Familiarity with enterprise AI integration (APIs, plugins, cloud deployment).
Preferred Qualifications
Bachelor’s or Master’s degree in Computer Science, Machine Learning, AI, or a related field (PhD preferred).
What We Offer
We offer dynamic career opportunities across an international landscape.
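The RAG pipelines mentioned above follow a retrieve-then-generate shape. A toy Python sketch of that shape: the bag-of-words `embed` function here is a deliberately crude stand-in for a real embedding model, and the assembled prompt would normally be sent to an LLM endpoint rather than returned:

```python
import math

# Toy RAG pipeline: embed -> retrieve top-k -> assemble prompt.
# embed() stands in for a real embedding model; document texts,
# the query, and the prompt template are all illustrative.

def embed(text: str) -> dict:
    """Bag-of-words 'embedding' (illustrative only)."""
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Infusion pumps are serviced quarterly.",
    "The cafeteria opens at eight.",
    "Pump alarms are logged centrally.",
]
prompt = build_prompt("how often are infusion pumps serviced", docs)
```

A production pipeline swaps `embed` for a real embedding model, the list scan for a vector database query, and passes `prompt` to the generation model; the retrieve-then-assemble structure stays the same.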
As an equal opportunity employer, we are committed to delivering value for all our employees and fostering a culture of respect.
Benefits
At CNH, we understand that the best solutions come from the diverse experiences and skills of our people. Here, you will be empowered to grow your career, to follow your passion, and help build a better future. To support our employees, we offer regional comprehensive benefits, including:
Flexible work arrangements
Savings & Retirement benefits
Tuition reimbursement
Parental leave
Adoption assistance
Fertility & Family building support
Employee Assistance Programs
Charitable contribution matching and Volunteer Time Off
Posted 22 hours ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Petrofac is a leading international service provider to the energy industry, with a diverse client portfolio including many of the world’s leading energy companies. We design, build, manage, and maintain infrastructure for our clients. We recruit, reward, and develop our people based on merit, regardless of race, nationality, religion, gender, age, sexual orientation, marital status, or disability. We value our people and treat everyone who works for or with Petrofac fairly and without discrimination.
The world is re-thinking its energy supply and energy security needs and planning for a phased transition to alternative energy sources. We are here to help our clients meet these evolving energy needs. This is an exciting time to join us on this journey. Are you ready to bring the right energy to Petrofac and help us deliver a better future for everyone?
JOB TITLE: Data Engineer
Key Responsibilities
Architecting and defining data flows for big data/data lake use cases.
Excellent knowledge of implementing the full life cycle of data management principles such as data governance, architecture, modelling, storage, security, master data, and quality.
Act as a coach and provide consultancy services and advice to data engineers by offering technical guidance and ensuring architecture principles, design standards, and operational requirements are met.
Participate in the Technical Design Authority forums.
Collaborate with analytics and business stakeholders to improve data models that feed BI tools, increasing data accessibility and fostering data-driven decision making across the organization.
Work with the team of data engineers to deliver tasks and achieve weekly and monthly goals, and guide the team to follow best practices and improve deliverables.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
Responsible for estimating the cluster size, core size, monitoring, and troubleshooting of the Databricks cluster and analysis server to produce optimal capacity for computing data ingestion.
Deliver master data cleansing and improvement efforts, including automated and cost-effective solutions for processing, cleansing, and verifying the integrity of data used for analysis.
Expertise in securing the big data environment, including encryption, tunnelling, access control, and secure isolation.
Guide and build highly efficient OLAP cubes using data modelling techniques to cater to all the required business cases and mitigate the limitations of Power BI in analysis services.
Deploy and maintain highly efficient CI/CD DevOps pipelines across multiple environments such as dev, staging, and production.
Strictly follow a Scrum-based Agile approach to development, working on allocated stories.
Comprehensive knowledge of extracting, transforming, and loading data from various sources such as Oracle, Hadoop HDFS, flat files, JSON, Avro, Parquet, and ORC.
Experience defining, implementing, and maintaining a global data platform.
Experience building robust and impactful data visualisation solutions and gaining adoption.
Extensive work experience onboarding various data sources using real-time, batch, or scheduled loads; the sources can be in the cloud, on premise, SQL DB, NoSQL DB, or API-based.
Expertise in extracting data through JSON, OData, REST API, web services, and XML.
Expertise in data ingestion platforms such as Apache Sqoop, Apache Flume, Amazon Kinesis, Fluent, Logstash, etc.
Hands-on experience using Databricks, Pig, Scala, Hive, Azure Data Factory, Python, and R.
Operational experience with big data technologies and engines, including Presto, Spark, Hive, and Hadoop environments.
Experience with various databases, including Azure SQL DB, Oracle, MySQL, Cosmos DB, and MongoDB.
Experience supporting and working with cross-functional teams in a dynamic environment.
Essential Qualification & Skills
Bachelor’s degree (master’s preferred) in Computer Science, Engineering, or any other technology-related field.
10+ years of experience in data analytics platforms and hands-on experience with ETL and ELT transformations, with strong SQL programming knowledge.
5+ years of hands-on experience in big data engineering, distributed storage, and processing massive data into a data lake using Scala or Python.
Proficient knowledge of the Hadoop and Spark ecosystems, including HDFS, Hive, Sqoop, Oozie, Spark Core, and streaming.
Experience with programming languages such as Scala, Java, Python, and shell scripting.
Proven experience in pulling data through REST API, OData, XML, and web services.
Experience with Azure product offerings and data platform.
Experience in data modelling (data marts, snowflake/star schemas, normalization, SCD2).
Architect and define data flows and build highly efficient, scalable data pipelines.
Work in tandem with the Enterprise and Domain Architects to understand the business goals and vision, and contribute to the Enterprise Roadmaps.
Strong troubleshooting and problem-solving skills for any issues stopping business progress.
Coordinate with multiple business stakeholders to understand requirements and deliver.
Conduct a continuous audit of data management system performance, refine whenever required, and report any breach or loophole to the stakeholders immediately.
Allocate tasks to team members, track status, and report on activities to management.
Understand the physical and logical plan of execution and optimize the performance of data pipelines.
Extensive background in data mining and statistical analysis.
Able to understand various data structures and common methods of data transformation.
Ability to work with ETL tools, with strong knowledge of ETL concepts.
Strong focus on delivering outcomes.
Data management: modelling, normalization, cleaning, and maintenance Understand Data architectures, Data warehousing principles and be able to participate in the design and development of conventional data warehouse solutions.
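The SCD2 technique listed under data modelling (preserving a full history of dimension changes) can be illustrated with a minimal Python sketch; the table layout and column names here are hypothetical, not taken from any specific warehouse:

```python
from datetime import date

# Toy Slowly Changing Dimension Type 2 (SCD2) merge: when a tracked
# attribute changes, close the current row and insert a new one,
# preserving full history. Column names are illustrative.

def scd2_merge(dim: list[dict], key: str, incoming: dict,
               today: date) -> list[dict]:
    for row in dim:
        if row[key] == incoming[key] and row["end_date"] is None:
            if row["city"] == incoming["city"]:
                return dim  # no change: keep the current row open
            row["end_date"] = today  # close the current version
            break
    # Insert the new current version (also handles brand-new keys).
    dim.append({key: incoming[key], "city": incoming["city"],
                "start_date": today, "end_date": None})
    return dim

dim = [{"cust_id": 1, "city": "Chennai",
        "start_date": date(2023, 1, 1), "end_date": None}]
dim = scd2_merge(dim, "cust_id", {"cust_id": 1, "city": "Pune"},
                 date(2024, 6, 1))
```

Warehouse engines express the same logic as a `MERGE` statement; the point of SCD2 is that the Chennai row survives with a closed date range instead of being overwritten.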
Posted 22 hours ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Location
Coimbatore, Onsite, Full time
About Us
VMax Health Tech is a dynamic and innovative startup on a mission to revolutionize the health and wellness industry. Our flagship product, FitMom Club, delivers personalized fitness and wellness solutions to mothers worldwide. We are a passionate team creating impactful digital experiences while fostering a culture of learning, creativity, and growth.
Job Description
We are looking for an enthusiastic Flutter Developer – Fresher to join our growing team. This role is perfect for a passionate, quick learner who wants to build high-quality mobile applications and kickstart their career in a fast-paced startup environment. You will work closely with our senior developers to design, develop, and maintain user-friendly and visually appealing mobile apps.
Role And Responsibilities
Assist in designing and building mobile apps using Flutter.
Work with senior developers to translate UI/UX designs and wireframes into responsive code.
Learn to integrate APIs and third-party libraries.
Support in testing, debugging, and improving application performance.
Help maintain code quality, organization, and documentation.
Collaborate with the team using Git for version control.
Stay updated with the latest Flutter and mobile app development trends.
Required Skills
Basic knowledge of Flutter and the Dart language.
Understanding of mobile UI design principles (Material Design guidelines).
Familiarity with API integration and local data storage.
Willingness to learn Firebase services (Firestore, Push Notifications, etc.).
Strong problem-solving skills and attention to detail.
Good communication and teamwork skills.
A passion for building user-friendly, quality applications.
Bonus (Nice-to-have)
Exposure to health & fitness apps.
Knowledge of any cloud platform (AWS, Azure, Google Cloud).
Culture & Inclusion
At VMax Health Tech, we believe that diversity drives innovation. We are committed to creating an inclusive workplace where every team member feels valued, respected, and empowered to contribute their best.
Why Join Us?
Hands-on learning with real projects from day one.
Mentorship from experienced developers.
A collaborative and supportive work culture.
Opportunities to grow and advance your career.
Be part of a startup making a real difference in people’s health and wellness.
Skills: flutter, mobile, design, code, cloud, developers, apps, health
Posted 22 hours ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
ASEC Engineers – A Verdantas Company is seeking a highly analytical and detail-oriented technical business analyst (comfortable working in the US Eastern Time Zone) with a strong focus on IT infrastructure to join our Global Infrastructure & Cloud Operations team. This role will be instrumental in documenting the current and future state of our IT environment, working closely with project managers, architects, and engineering teams to gather requirements, create system diagrams, and define operational processes. The ideal candidate will have a solid understanding of enterprise IT infrastructure, excellent communication skills, and a passion for translating complex technical environments into clear, actionable documentation.
Key Responsibilities:
A. Requirements Gathering & Analysis
Collaborate with project managers, architects, and stakeholders to gather and analyze business and technical requirements.
Conduct interviews, workshops, and document reviews to understand infrastructure needs and project goals.
Translate business requirements into functional and technical specifications.
This role requires close alignment and collaboration with the US Eastern Time Zone.
B. Documentation & Visualization
Create and maintain detailed documentation of current and future state infrastructure, including network diagrams, system architecture diagrams, data flow diagrams, and process and workflow documentation.
Develop standard operating procedures (SOPs), runbooks, and knowledge base articles.
Ensure documentation is version-controlled, accessible, and aligned with organizational standards.
C. Project Support
Support infrastructure and cloud-related projects by providing clear documentation and analysis.
Assist in defining project scope, objectives, and deliverables from a technical documentation perspective.
Participate in project meetings and provide updates on documentation progress and gaps.
D. Collaboration & Communication
Act as a liaison between technical teams and business stakeholders.
Facilitate communication across cross-functional teams to ensure alignment on infrastructure changes and documentation needs.
Present findings and documentation to technical and non-technical audiences.
E. Quality Assurance & Compliance
Ensure documentation meets internal quality standards and compliance requirements.
Support audits and risk assessments by providing accurate and up-to-date documentation.
Identify opportunities for process improvement and standardization.
Qualifications:
A. Required:
Bachelor’s degree in information technology, computer science, or a related field.
5+ years of experience as a technical business analyst, systems analyst, or in an infrastructure documentation role.
Strong understanding of IT infrastructure components (servers, storage, networking, cloud, and virtualization).
Proficiency with diagramming and documentation tools (e.g., Microsoft Visio, Lucidchart, Draw.io, and Confluence).
Excellent written and verbal communication skills.
Strong analytical and problem-solving abilities.
B. Preferred:
Experience with cloud platforms (Azure, AWS, GCP).
Familiarity with ITIL, COBIT, or other IT governance frameworks.
Experience working in Agile or hybrid project environments.
Knowledge of enterprise architecture frameworks (e.g., TOGAF).
Key Competencies: attention to detail, technical curiosity, stakeholder management, process orientation, adaptability and initiative.
Ready to Build the Future with Us?
“Join us at ASEC Engineers, a Verdantas Company, and make a meaningful impact—professionally and environmentally. Be part of a visionary team driving innovation, sustainability, and transformative solutions that shape the future.”
Posted 22 hours ago
10.0 years
0 Lacs
Mohali district, India
On-site
Job Title: Digital Security Engineer / Lead
Experience Required: 9–10+ Years
Location: Mohali (work from office)
Employment Type: Full-Time
Position Overview
We are seeking a highly skilled and motivated Digital Security Engineer/Lead to define and implement our security strategy for digital assets. The ideal candidate will have extensive hands-on experience with cloud-native web application firewalls, cloud security platforms, and application gateway management, coupled with strong leadership and stakeholder management skills. This role involves working with global e-commerce platforms, mentoring team members, and ensuring best-in-class digital security practices.
Key Responsibilities
Security Strategy & Implementation
Define and execute the security strategy for all digital assets.
Deploy, configure, and maintain cloud-native Web Application Firewalls (WAF) across major cloud providers (AWS, Azure, GCP).
Implement comprehensive WAF event logging and incident response processes.
Update threat models based on WAF event patterns and emerging risks.
Develop, maintain, test, and troubleshoot WAF rulesets and configurations.
Cloud & Application Security
Hands-on experience with Azure Cloud, Akamai, and Application Gateway (mandatory).
Design, optimize, and secure infrastructure for web applications in cloud environments.
Monitor system activities, fine-tune parameters, and ensure optimal performance and security.
Evaluate existing solutions, provide recommendations, and engage with application development teams on infrastructure and security initiatives.
Leadership & Collaboration
Partner with stakeholders and end users to translate high-level specifications into secure application solutions.
Mentor junior engineers, ensuring adherence to development and security best practices.
Communicate effectively with teams and leadership, aligning on strategy, priorities, and results.
Participate in project planning, reporting, and execution across multiple initiatives.
Security Operations & Monitoring
Oversee the design, implementation, and optimization of Security Information and Event Management (SIEM) solutions.
Research and recommend best-fit infrastructure, network, database, and security architectures.
Create and maintain tools for continuous monitoring and proactive threat detection.
(Plus) Experience working on Privileged Access Management (PAM) solutions.
Qualifications & Skills
9–10+ years of experience in digital security engineering with leadership responsibilities.
Proven expertise in Web Application Firewall (WAF) deployment, configuration, and management across AWS, Azure, and GCP.
Strong knowledge of Azure Cloud, Akamai, and Application Gateway (hands-on).
Solid understanding of SIEM solutions and incident response frameworks.
Experience in infrastructure, network, database, and application security design.
Strong analytical, problem-solving, and communication skills.
Ability to mentor and lead technical teams while collaborating with cross-functional stakeholders.
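At its core, the WAF ruleset work described above is ordered pattern matching over request data plus event logging. A toy Python illustration: the rule IDs, patterns, and log format are invented, and real WAFs (Azure WAF, Akamai, AWS WAF) inspect full HTTP context rather than a single string:

```python
import re

# Toy WAF rule engine: evaluate ordered rules against a request and
# log matches. Rule IDs and patterns are simplified illustrations.

RULES = [
    ("SQLI-001", re.compile(r"(?i)\bunion\s+select\b")),   # SQL injection probe
    ("XSS-001", re.compile(r"(?i)<script")),               # script-tag injection
    ("TRAVERSAL-001", re.compile(r"\.\./")),               # path traversal
]

event_log = []

def inspect(request_path: str, body: str = "") -> str:
    """Return 'block' on the first matching rule, else 'allow'."""
    payload = request_path + " " + body
    for rule_id, pattern in RULES:
        if pattern.search(payload):
            event_log.append({"rule": rule_id, "path": request_path})
            return "block"
    return "allow"

verdicts = [
    inspect("/search?q=1 UNION SELECT password"),
    inspect("/profile", body="<script>alert(1)</script>"),
    inspect("/health"),
]
```

The `event_log` list plays the role of the WAF event stream the posting mentions: those events feed incident response and, over time, the threat-model updates and ruleset tuning.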
Posted 22 hours ago
8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
We are seeking a Senior Full Stack Engineer with a strong backend focus and proven expertise in the MERN stack to take complete ownership of system architecture and development. This role demands a hands-on leader capable of delivering scalable, high-performance applications end-to-end.
Key Responsibilities
Design and implement end-to-end architecture for complex web applications.
Lead backend development using Node.js and Express.js, with strong MongoDB optimization skills.
Develop responsive, maintainable frontends using React.js.
Define coding best practices, conduct reviews, and mentor engineers.
Collaborate cross-functionally to ensure quality and timely delivery.
Requirements
6–8+ years professional experience (less than 6 years not eligible).
Strong expertise in the MERN stack (MongoDB, Express.js, React.js, Node.js).
Deep understanding of system architecture, API design, and database optimization.
Hands-on cloud experience (AWS/GCP/Azure preferred).
Strong GitHub portfolio demonstrating quality work (mandatory).
Must be able to join within 30 days (highly preferred).
Posted 22 hours ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description Summary
MMS Infusion is looking for a Product Owner who has a passion for learning and developing high-quality, customer-focused, and impactful software products. This position will be responsible for developing a deep technical understanding of products/systems while interacting with other teams and the Architect, evaluating their systems, and designing product integrations. This is a hands-on software development position. Lead by example, adopting a "whatever it takes" approach to get the job done, while nurturing a sense of camaraderie, success, and appreciation among the teams. The ideal candidate will bring energy, creativity, and collaboration to our organization.
Job Description
About BD: BD is one of the largest global medical technology companies in the world and is advancing the world of health by improving medical discovery, diagnostics, and the delivery of care. The company develops innovative technology, services and solutions that help advance both clinical therapy for patients and clinical process for health care providers. BD has 70,000 employees and a presence in virtually every country around the world to address some of the most challenging global health issues.
About BD TCI: BD, a 125-year-old global medical device company, has started its Research and Development Organization in Bangalore, India, which is BD Technology Campus India (TCI). This R&D center will be an integral part of the global R&D in design-related activities and full product life cycle management. This R&D organization will have highly skilled associates in the fields of engineering and science.
Job Responsibilities:
Drive discussions with technical stakeholders to uncover strategic needs and align on key technologies.
Establish systems and solution design in large, sophisticated, data-intensive systems.
Partner with external teams, evaluate architectures, and make recommendations on implementations and improvements.
Drive architecture and software patterns and standard methodologies across the department, and provide technical guidance and mentorship to application development teams.
Develop and maintain a comprehensive view of current and future state architecture that aligns with the business strategy.
Act as an individual contributor (hands-on developer) to implement designs with application teams.
Conduct POCs on new technologies and build-vs-buy analysis to determine suitability of tech stack expansion.
Education and Experience:
Minimum BS/BE in engineering or a relevant field.
Minimum 10+ years of experience in various elements of software testing and automation.
Knowledge and Skills:
Experience creating user stories, requirements, and acceptance criteria.
Strong collaboration and communication skills.
Collaborate closely with designers and engineers to create effective solutions, and then work together to deliver those solutions to the market.
Work with multiple internal and external stakeholders and customers to elicit requirements and understand their needs.
Bring to the team a solid knowledge of the various constraints of the business: marketing, sales, service, finance, legal, and security.
Contribute to the team a deep knowledge of our users and customers.
Be aware of and follow industry trends as they pertain to the product.
Influence outcomes through your use of data and logic. Define key success metrics. Measure, adjust, and iterate. Create, refine, and drive the prioritization of the Platform backlog. Identify and coordinate intra-team dependencies. Knowledge of software systems development and architecture best practices and patterns. Broad exposure to system integrations, integration patterns, and standard methodologies. Knowledge of distributed systems principles, design, and architecture. Knowledge of the .NET platform, databases (at least one of MySQL, MS SQL, Azure SQL), and testing tools (at least one of Selenium, Cypress). Knowledge and experience using Agile/Scrum/Kanban and associated tools.

Good to have: Previous experience building platform capabilities as products is a big differentiator for this role. Previous experience in healthcare IT is a great plus but not required. Healthcare/regulated industry experience. Experience in multiple stacks is a major plus. Experience working on at least one cloud provider. Must have experience working with global software teams.

Required Skills Optional Skills Primary Work Location IND Bengaluru - Technology Campus Additional Locations Work Shift
Posted 22 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Responsibilities: 1. Architect and develop scalable AI applications focused on indexing, retrieval systems, and distributed data processing. 2. Collaborate closely with framework engineering, data science, and full-stack teams to deliver an integrated developer experience for building next-generation context-aware applications (i.e., Retrieval-Augmented Generation (RAG)). 3. Design, build, and maintain scalable infrastructure for high-performance indexing, search engines, and vector databases (e.g., Pinecone, Weaviate, FAISS). 4. Implement and optimize large-scale ETL pipelines, ensuring efficient data ingestion, transformation, and indexing workflows. 5. Lead the development of end-to-end indexing pipelines, from data ingestion to API delivery, supporting millions of data points. 6. Deploy and manage containerized services (Docker, Kubernetes) on cloud platforms (AWS, Azure, GCP) via infrastructure-as-code (e.g., Terraform, Pulumi). 7. Collaborate on building and enhancing user-facing APIs that provide developers with advanced data retrieval capabilities. 8. Focus on creating high-performance systems that scale effortlessly, ensuring optimal performance in production environments with massive datasets. 9. Stay updated on the latest advancements in LLMs, indexing techniques, and cloud technologies to integrate them into cutting-edge applications. 10. Drive ML and AI best practices across the organization to ensure scalable, maintainable, and secure AI infrastructure. Qualifications: Educational Background: Bachelor's or Master’s degree in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field. PhD preferred. Certifications in Cloud Computing (AWS, Azure, GCP) and ML technologies are a plus. Technical Skills: 1. Expertise in Python and related frameworks (Pydantic, FastAPI, Poetry, etc.) for building scalable AI/ML solutions. 2. 
Proven experience with indexing technologies: Building, managing, and optimizing vector databases (Pinecone, FAISS, Weaviate) and search engines (Elasticsearch, OpenSearch). 3. Machine Learning/AI Development: Hands-on experience with ML frameworks (e.g., PyTorch, TensorFlow) and fine-tuning LLMs for retrieval-based tasks. 4. Cloud Services & Infrastructure: Deep expertise in architecting and deploying scalable, containerized AI/ML services on cloud platforms using Docker, Kubernetes, and infrastructure-as-code tools like Terraform or Pulumi. 5. Data Engineering: Strong understanding of ETL pipelines, distributed data processing (e.g., Apache Spark, Dask), and data orchestration frameworks (e.g., Apache Airflow, Prefect). 6. APIs Development: Skilled in designing and building RESTful APIs with a focus on user-facing services and seamless integration for developers. 7. Full Stack Engineering: Knowledge of front-end/back-end interactions and how AI models interact with user interfaces. 8. DevOps & MLOps: Experience with CI/CD pipelines, version control (Git), model monitoring, and logging in production environments. Experience with LLMOps tools (Langsmith, MLflow) is a plus. 9. Data Storage: Experience with SQL and NoSQL databases, distributed storage systems, and cloud-native data storage solutions (S3, Google Cloud Storage).
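The indexing-and-retrieval core this posting describes reduces, conceptually, to nearest-neighbour search over embeddings. Below is a minimal, dependency-free sketch of that idea; the document ids and toy 3-dimensional embeddings are illustrative only, and production systems would use FAISS, Pinecone, or Weaviate with real embedding models rather than brute-force cosine similarity:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, k=2):
    """Return the ids of the k documents most similar to the query embedding."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy index: document id -> embedding (a vector database plays this role at scale).
index = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.9, 0.1, 0.0],
    "doc-c": [0.0, 1.0, 0.0],
}
print(retrieve([1.0, 0.05, 0.0], index))  # ['doc-a', 'doc-b']
```

Vector databases replace this linear scan with approximate nearest-neighbour structures (HNSW, IVF) so the same query stays fast over millions of data points.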
Posted 22 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us: At Recro, we connect top tech talent with innovative companies. We are hiring for our client, a product-based company in the retail domain, seeking a Data Scientist who can apply scientific thinking to real-world business problems and drive impactful solutions. Key Responsibilities Investigate the feasibility of applying scientific principles and concepts to solve complex business challenges. Independently design and develop new algorithms for well-defined retail problems, and optimize existing algorithms for better performance. Collaborate with the product team to implement new modules, maintain production pipelines, and ensure timely releases. Build Proof of Concepts (POCs) using new ideas or technologies, and promote innovative solutions with internal and external stakeholders. Work closely with Data Engineering, Advisory, and Product teams to identify improvement areas and drive process efficiency. Mandatory Skills & Experience Retail Experience & Business Acumen – At least one retail project experience, even if it’s 30–50% of your overall work experience. Machine Learning – End-to-end development of at least one ML model independently. Programming & Tools – Strong hands-on experience with SQL and Python (PySpark is a plus). Preferred Qualifications Experience with large-scale data processing frameworks. Exposure to cloud platforms (AWS, Azure, or GCP). Strong analytical and problem-solving skills. Why Join? Opportunity to work on impactful projects in the retail analytics domain. Collaborative work culture with innovative and agile teams. Scope to experiment, innovate, and bring ideas to life.
Posted 22 hours ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Delivery Head – IT Infrastructure Services Experience: 15+ years in IT Infrastructure Delivery, P&L Management, and Client Engagement

Job Summary The Delivery Head (IT Infrastructure) will lead the end-to-end delivery of IT infrastructure services (Cloud, Data Center, Network, Security, etc.), ensuring revenue growth, profitability, and client satisfaction. This role requires a strong background in infrastructure project delivery, contract management, and C-level stakeholder engagement.

Key Responsibilities 1. Revenue Growth & Account Expansion Drive YoY revenue growth through strategic account planning and expansion of infrastructure services (Cloud Migration, Managed Services, DevOps, etc.). Identify upsell/cross-sell opportunities (e.g., hybrid cloud, cybersecurity) and lead proposal efforts. 2. Profit & Margin Management Own gross margin targets for fixed-bid and managed infrastructure contracts. Optimize delivery costs (resource utilization, vendor negotiations, automation). 3. Budget & Contract Oversight Ensure adherence to fixed budgets for infrastructure projects (SOWs, MSAs, POs). Manage contract renewals, amendments, and compliance (SLAs, KPIs). 4. Client Relationship & Governance Lead C-level stakeholder engagement (CIOs, CTOs) and conduct Quarterly Business Reviews (QBRs). Improve Net Promoter Score (NPS) by addressing client pain points proactively. 5. Delivery Excellence Oversee infrastructure service delivery (ITIL, Agile, DevOps) with a focus on quality, scalability, and security. Resolve critical escalations (e.g., outages, security breaches) with minimal client impact. 6. Team Leadership Manage onsite/offshore delivery teams (Infrastructure Architects, Cloud Engineers, Network Specialists). Foster innovation (AIOps, automation) and skill development.

Qualifications & Skills Must-Have: 15+ years in IT Infrastructure Services (Data Center, Cloud, Network, Security).
Proven track record in P&L management ($10M+ accounts) and margin optimization. Expertise in infrastructure contracts (SOWs, MSAs) and governance frameworks (ITIL, ISO 27001). Experience working with C-level stakeholders (CIOs, CISOs). Preferred: Certifications: ITIL, PMP, AWS/Azure Cloud, CISSP. Background in managed services and transition/migration projects.
Posted 22 hours ago
5.0 years
1 - 20 Lacs
Pune, Maharashtra, India
On-site
Job Title: Generative AI Developer Experience Required: 5+ years Location: Pune Employment Type: Full-time

Job Summary We are seeking an experienced Generative AI Developer with a strong background in Python, modern web frameworks, and advanced AI concepts such as LLMs and RAG pipelines. The ideal candidate will be responsible for building, deploying, and optimizing AI-driven solutions that integrate cutting-edge generative models into production systems.

Responsibilities Design, develop, and deploy scalable AI-powered applications. Implement and fine-tune LLMs for specific business use cases. Build and optimize RAG pipelines to improve the accuracy and relevance of AI responses. Develop robust APIs using Django or FastAPI for AI model integration. Work with cross-functional teams to define requirements and deliver high-quality solutions. Stay updated with the latest advancements in generative AI and machine learning.

Qualifications Bachelor’s or Master’s in Computer Science, AI, Data Science, or a related field. Proven 5+ years in Python development with production-grade applications. Hands-on experience with Django or FastAPI. Solid knowledge of LLMs (e.g., OpenAI GPT, LLaMA, Falcon, Mistral). Experience in designing RAG systems using vector databases (e.g., Pinecone, FAISS, Weaviate). Strong understanding of prompt engineering, fine-tuning, and model evaluation.

Preferred Experience deploying AI applications in cloud environments (AWS, Azure, GCP). Familiarity with LangChain, Haystack, or similar AI orchestration frameworks. Understanding of MLOps for continuous deployment of AI models.

Skills: Django, Python, FastAPI, GenAI, LLM, RAG
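At the prompt layer, "building a RAG pipeline" largely means grounding the model's answer in retrieved passages. A minimal sketch of that step follows; the function name, character budget, and sample passages are illustrative (real pipelines budget by token count and use a framework such as LangChain):

```python
def build_rag_prompt(question, retrieved_chunks, max_chars=1000):
    """Pack retrieved passages into an LLM prompt, respecting a context budget."""
    context, used = [], 0
    for chunk in retrieved_chunks:
        if used + len(chunk) > max_chars:
            break  # stop once the context budget is exhausted
        context.append(chunk)
        used += len(chunk)
    return (
        "Answer using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(context)
        + f"\n\nQuestion: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    ["Refunds are accepted within 30 days of purchase.",
     "Shipping is free on orders over $50."],
)
print(prompt.splitlines()[0])  # Answer using only the context below.
```

The "only the context below" instruction is what pushes the model toward grounded answers rather than free-form generation.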
Posted 22 hours ago
12.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Data Architecture and Engineering Lead Job location: Ahmedabad (full-time)

Responsibilities: Lead Data Architecture: Own the design, evolution, and delivery of enterprise data architecture across cloud and hybrid environments. Develop relational and analytical data models (conceptual, logical, and physical) to support business needs and ensure data integrity. Consolidate Core Systems: Unify data sources across airport systems into a single analytical platform optimized for business value. Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, etc. Implement Microservice Architecture. Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage. Data Modeling. Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics. Operationalize AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including forecasting, pricing models, and personalization. Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Operations, Finance, HR, Legal, and Technology. Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.

Qualifications: 12+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments. Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases. Hands-on expertise with ingestion, transformation, and orchestration pipelines. Extensive experience with Microsoft Azure data services, including Azure Data Lake Storage, Azure Databricks, Azure Data Factory, and related technologies.
Strong knowledge of ERP data models, especially SAP and MS Dynamics. Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices. Familiarity with distributed systems and streaming frameworks like Spark or Flink. Strong stakeholder management and communication skills, with the ability to influence both technical and business teams.

Tools & Technologies
Warehousing: Azure Databricks Delta, BigQuery
Big Data: Apache Spark
Cloud Platforms: Azure (ADLS, AKS, EventHub, ServiceBus)
Streaming: Kafka, Pub/Sub
RDBMS: PostgreSQL, MS SQL, Oracle
Other Stores: MongoDB, Hadoop, ClickHouse
Monitoring: Azure Monitoring, App Insights, Prometheus, Grafana
Posted 22 hours ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Duration: FULL TIME Cloud Engineer: 6-8 Years Location: Hyderabad

This position is responsible for the daily administration and operation of the data and voice network, servers, virtual computing platforms (Nutanix), and other hardware supporting applications used by RxBenefits employees and clients. He/She is responsible for implementing and supporting existing network and infrastructure technology solutions within the organization and for analyzing, evaluating, and implementing additional solutions productively and effectively.

Primary Duties & Responsibilities: Responsible for the day-to-day administration of the onsite Nutanix infrastructure, including configuration, upgrades, backups, and patching. Responsible for maintaining network infrastructure, including switches (Extreme), firewalls, and wireless environments (Fortinet). Responsible for onsite network services in Birmingham (DNS, DHCP, SNMP, etc.). Responsible for the design, configuration, and tuning of application and server monitoring systems (Zabbix). Responsible for responding to system alerts, investigating and resolving related system failures, and recovering/restoring failed systems and applications. Acts as an escalation point for advanced end-user and PC support issues. Works closely with Cloud II and III team members to assist with the following environments: Assists with the configuration of local Active Directory and Azure-related integration. Assists in the administration of the Office 365 environment. Assists with the administration of our cloud infrastructure (AWS and Azure). Assists with support of 3rd-party hosted applications. Assists with development of operational documentation. Assists with creating, maintaining, and executing disaster recovery and high availability plans.

Required Knowledge, Skills and Abilities: A minimum of five (5) years' experience in IT. Knowledge of Windows and Linux technologies.
Hands-on experience with Windows and Linux servers. Hands-on experience with switches, routers, and firewall maintenance. Knowledge of advanced network topology design and administration, including LAN, WAN, security, and related technologies. Knowledge of and experience with data backup and recovery processes. Knowledge of virtual computing infrastructures (e.g., Nutanix). Ability to work independently and manage multiple projects and processes to achieve commitments. Excellent interpersonal and communication (verbal and written) skills at all levels of the organization. Process and technical documentation skills. Skilled at organizing and communicating the use of technology systems and services to others. Strong analytical and problem-solving skills. General technology and application support skills. Ability to deliver on objectives. Customer service skills.
Posted 22 hours ago
5.0 years
0 Lacs
India
Remote
100% Remote Role Permanent position Job Title: Azure Admin with Azure Key Vault experience Shift timings: 6:30 PM to 3:30 AM (night shift) Salary range: 12-16 LPA fixed (as per your experience)

Required: At least 5+ years of experience as an Azure administrator, with Azure Key Vault. Good experience in access management of resources through the creation and management of credentials using Azure Key Vault and SAS tokens. Good experience in deploying and configuring various cloud resources (e.g., virtual machines, storage accounts, Key Vault). Hands-on experience in providing technical support for cloud-based infrastructure and applications. Strong communication skills are essential, as the interview will be conducted by a U.S.-based interviewer.

Qualifications: Familiarity with various operating systems (Windows, Linux). Strong understanding of cloud computing concepts such as IaaS, PaaS, SaaS, virtualization, and containerization. Understanding of network protocols (TCP/IP, HTTP, DNS), subnetting, and network security concepts. Experience with scripting languages like PowerShell or Bash for automating cloud tasks and infrastructure management. Knowledge of blob storage, Azure Storage Accounts, S3, and means of transferring data. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Eagerness to learn and adapt to new technologies and security challenges.

Responsibilities Primary responsibility will be access management of resources through the creation and management of credentials using Azure Key Vault and SAS tokens. Create and manage subscriptions, resource groups, and virtual networks (VPCs) in Azure and AWS. Deploy and configure various cloud resources (e.g., virtual machines, storage accounts, Key Vault). Configure access policies and permissions via Microsoft Entra and Azure IAM. Create and maintain clear and concise documentation for cloud infrastructure and processes.
Position Overview: The staff member will gain valuable hands-on experience providing technical support for cloud-based infrastructure and applications. This role will serve as a stepping stone for individuals interested in pursuing a career in cloud computing and IT support.

Preferred Qualifications: Familiarity with cloud security platforms (AWS, Azure, GCP) is a plus. Industry certifications (CompTIA Cloud+, Azure Administrator Associate, etc.) are a plus but not required. Experience with automation (Terraform, Bicep). Basic knowledge of cybersecurity principles and practices, including networking, firewalls, encryption, endpoint security, MFA, and vulnerability management.

What the Job Offers: Hands-on experience with industry-leading cloud platforms. Mentorship from experienced cloud and IT professionals. Opportunities to work on meaningful projects with direct impact. A collaborative and supportive work environment.
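For context on the SAS-token responsibility above: a shared access signature is, conceptually, an HMAC signature over a resource path plus an expiry, appended as query parameters. The stdlib-only sketch below illustrates that mechanism; it is not the Azure SDK (real SAS tokens are issued via the azure-storage libraries with account or user-delegation keys, and use a richer string-to-sign), and the query format here is deliberately simplified:

```python
import base64
import hashlib
import hmac
import time

def sign_token(resource, expiry_epoch, key):
    """Issue an expiring token: the signature covers the resource and expiry."""
    msg = f"{resource}\n{expiry_epoch}".encode()
    sig = base64.urlsafe_b64encode(hmac.new(key, msg, hashlib.sha256).digest()).decode()
    return f"{resource}?se={expiry_epoch}&sig={sig}"

def verify_token(token, key, now=None):
    """Accept the token only if the signature matches and it has not expired."""
    resource, _, query = token.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expected = sign_token(resource, int(params["se"]), key)
    return hmac.compare_digest(expected, token) and (now or time.time()) < int(params["se"])

key = b"demo-secret"
token = sign_token("/container/report.csv", int(time.time()) + 3600, key)
print(verify_token(token, key))        # True
print(verify_token(token + "x", key))  # False: tampered signature
```

Because the signature binds the resource and expiry together, neither can be altered without access to the key, which is exactly what Key Vault exists to protect.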
Posted 22 hours ago
3.0 years
0 Lacs
India
Remote
Job Title: n8n Automation Engineer Location: Remote Employment Type: Full-Time

Role Overview: We are looking for a skilled n8n Automation Engineer to design, develop, and maintain automated workflows using n8n, integrating multiple APIs, data sources, and business applications. The ideal candidate will have a strong background in automation platforms, API integration, and problem-solving.

Key Responsibilities: Design, build, and maintain complex workflows using n8n. Integrate multiple APIs, webhooks, and third-party services. Troubleshoot and optimize existing automation workflows for efficiency and reliability. Collaborate with cross-functional teams to identify automation opportunities. Maintain documentation for automation processes and integrations. Ensure workflows are secure, scalable, and follow best practices.

Requirements: Proven experience with n8n (minimum 3 years). Strong knowledge of REST APIs, webhooks, and JSON data handling. Proficiency in JavaScript (Node.js experience is a plus). Familiarity with databases (MySQL, PostgreSQL, MongoDB). Understanding of authentication methods (OAuth2, API keys, JWT). Problem-solving mindset and ability to work independently. Good communication skills for working in a collaborative environment.

Preferred Skills: Experience with other automation tools (Zapier, Make/Integromat, Airflow). Knowledge of cloud platforms (AWS, Azure, GCP). Experience in workflow optimization and error handling.

How to Apply: Send your CV, portfolio (if applicable), and examples of workflows you’ve built in n8n to hr@itbutler.sa with the subject "n8n Automation Engineer Application".
Posted 22 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Role We are looking for enthusiastic and self-motivated AI/ML Trainees who are passionate about artificial intelligence, data science, and machine learning. As a trainee, you will work closely with our data science and engineering teams to build, train, test, and deploy machine learning models that solve real-world problems. Key Responsibilities Assist in data collection, preprocessing, cleaning, and exploration. Support in building and training machine learning and deep learning models. Conduct literature reviews and research to stay updated on recent developments in AI/ML. Collaborate with software engineers and data scientists to integrate models into production systems. Document your work and present findings to the team. Required Skills Strong understanding of Python and its ML libraries (NumPy, pandas, scikit-learn, etc.) Basic knowledge of machine learning concepts such as supervised/unsupervised learning, classification, regression, clustering. Exposure to deep learning frameworks like TensorFlow, PyTorch, or Keras (preferred but not mandatory). Understanding of data preprocessing, feature engineering, and model evaluation techniques. Good analytical and problem-solving skills. Effective communication and a team-oriented mindset. Preferred Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or related field (ongoing or completed). Academic or personal projects in AI/ML (GitHub links, Kaggle profiles are a plus). Familiarity with cloud platforms (AWS, GCP, Azure) is a bonus.
Posted 22 hours ago
3.0 years
0 Lacs
India
Remote
In Norconsulting we are currently looking for a Data Engineer to join us in a freelance opportunity for a major banking organization. Duration: long term. Location: Remote from India. Rate: 105 USD per day (around 2100 per month).

The main scope of work is data integration and data cleanup for Knowledge 360 and other related products for the knowledge system. Related tasks include, but are not limited to: Design and implement cloud-based solutions using Azure and Google Cloud platforms. Automate data pipelines to facilitate efficient data flow and processing. Develop and manage data pipelines using technologies like PySpark and Databricks. Ensure the scalability and reliability of data processing workflows. Work with Large Language Models (LLMs) such as GPT and OpenAI models to enhance applications with natural language processing (NLP) capabilities. Design and implement prompt engineering strategies to optimize model performance in business applications. Train, fine-tune, and deploy AI models, analyzing their performance based on real-world results. Optimize models using feedback mechanisms to improve efficiency. Utilize strong communication skills to convey complex technical concepts to non-technical stakeholders.

SKILLS / EXPERIENCE REQUIRED 3+ years of hands-on experience in Python, PySpark, Databricks, and cloud platforms like Azure and Google Cloud. Experience in designing and implementing cloud-based solutions and automating data pipelines. Experience working with Large Language Models (LLMs) such as GPT or OpenAI models to enhance solutions with natural language processing (NLP) capabilities. Proven ability in metadata management and schema mapping. Experience working directly with clients, understanding their requirements, and delivering tailored solutions. Strong communication skills to effectively convey technical concepts to non-technical stakeholders. Excellent analytical and problem-solving skills with keen attention to detail with respect to data.
Ability to work independently and in collaboration with cross-functional teams. Track record of successfully managing multiple tasks and projects in a fast-paced environment. Experience in end-to-end project delivery and managing deliverables. Cloud certifications (Azure, Google Cloud, or similar) are a plus.
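Since the scope above centres on data cleanup and schema mapping, a small sketch of the mapping step may help. All field names, the sample record, and the required-field set are hypothetical, not taken from the actual Knowledge 360 product:

```python
def map_schema(record, mapping, required=()):
    """Rename source fields to target names and report missing required fields."""
    mapped = {target: record[source] for source, target in mapping.items() if source in record}
    missing = [field for field in required if field not in mapped]
    return mapped, missing

# Hypothetical source-to-target field mapping for a cleanup pipeline.
mapping = {"doc_id": "document_id", "ttl": "title", "body_txt": "body"}
row = {"doc_id": "K360-001", "ttl": "Lending Policy", "extra": "ignored"}

clean, missing = map_schema(row, mapping, required=("document_id", "title", "body"))
print(clean)    # {'document_id': 'K360-001', 'title': 'Lending Policy'}
print(missing)  # ['body']
```

In a PySpark pipeline the same mapping would typically be expressed as a sequence of `withColumnRenamed`/`select` operations, with the missing-field report feeding a data-quality check.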
Posted 22 hours ago
5.0 years
0 Lacs
India
Remote
Job Title: Senior DevOps Engineer – Kubernetes Job Type: Contract Duration: 6-12 months Work mode: Remote

Overview Our client is seeking an experienced Azure/AWS DevOps Engineer to join their expanding technology team. This role is ideal for a software engineering professional with a passion for automation and a strong drive to enhance development, deployment, and monitoring processes. The successful candidate will bring deep expertise in DevOps best practices, Kubernetes, Istio Service Mesh, CI/CD automation, and cloud infrastructure, along with a collaborative mindset and problem-solving skills.

Key Responsibilities Design, implement, and maintain CI/CD pipelines using GitHub Actions, Azure DevOps (AZDO), and Terraform. Manage and optimize Kubernetes clusters and Istio Service Mesh for scalable and secure application deployment. Automate infrastructure provisioning, configuration, and monitoring across Azure and AWS environments. Collaborate with software development teams to streamline build, deployment, and release processes. Implement configuration management solutions using tools like Chef or Ansible. Monitor application and infrastructure performance, troubleshoot issues, and ensure high availability.

Required Qualifications Bachelor’s degree in Computer Science or a related field. 5+ years of experience in a DevOps-related role. Proven expertise with Kubernetes, Istio Service Mesh, Terraform, and Azure/AWS environments. Strong proficiency in scripting languages such as Bash, Python, Node.js, and PowerShell. Hands-on experience with configuration management tools (Chef, Ansible). Excellent communication, collaboration, and problem-solving skills. Ability to work effectively in a fast-paced, dynamic environment.
Posted 22 hours ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
Senior Node.js Developer – Coimbatore (Onsite) Company: Ad Hash Technolabs Pvt Ltd Location: Coimbatore, Tamil Nadu Employment Type: Full-time Experience Required: Minimum 5 Years Salary Range: Competitive (based on experience)

About the Role We are hiring a Senior Node.js Developer with strong full stack development expertise in Node.js, Angular (v10+), and MongoDB. This is a full-time onsite role in Coimbatore where you will build scalable applications, robust RESTful APIs, and dynamic front-end interfaces. You will collaborate with cross-functional teams and mentor junior developers while ensuring high standards in security, performance, and scalability.

Key Responsibilities Develop, maintain, and optimize RESTful APIs using Node.js and Express.js. Build dynamic, responsive front-end applications using Angular (latest version), RxJS, and Angular Material. Perform MongoDB data modeling, indexing, aggregation, and performance tuning. Integrate third-party APIs and microservices into existing platforms. Ensure application security, scalability, and optimized performance. Collaborate closely with UI/UX designers, DevOps, and QA engineers. Participate in code reviews, enforce coding best practices, and mentor junior developers. Deploy and maintain applications on cloud (AWS/Azure) or on-premise environments.

Required Skills & Qualifications Backend: Node.js, Express.js, TypeScript (preferred). Frontend: Angular (v10+), RxJS, Angular Material. Database: MongoDB with Mongoose ODM (preferred). Version Control: Git, GitHub/GitLab. API Development: REST (GraphQL experience is a plus). Authentication: JWT, OAuth. Cloud: Basic AWS/Azure knowledge (preferred). Education: Bachelor’s degree in Computer Science, Engineering, or related field. Experience: Minimum 5+ years in full stack development. Strong problem-solving, debugging, and team collaboration skills.

Why Work With Us? Work on high-impact, enterprise-grade web applications.
Leadership opportunities and scope to mentor junior developers. Collaborative and growth-oriented work culture. Competitive salary and benefits package.

How to Apply: Click “Apply” on Indeed or send your resume to hr@adhashtech.com.

SEO Keywords: Senior Node.js Developer Coimbatore, Node.js Full Stack Developer Jobs, Angular Developer Coimbatore, MongoDB Developer Jobs Tamil Nadu, REST API Developer Jobs, Full-time Onsite IT Jobs Coimbatore, Express.js Developer Careers.

Job Type: Full-time Pay: Up to ₹1,200,000.00 per year Benefits: Health insurance Paid sick time Work Location: In person
Posted 22 hours ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Summary We are seeking an experienced DevOps Architect to drive the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate should have deep expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across multiple cloud environments, along with strong leadership and mentoring skills.

Duties and Responsibilities: Lead and manage the DevOps team to ensure reliable infrastructure and automated deployment processes. Design, implement, and maintain highly available, scalable, and secure cloud infrastructure (AWS, Azure, GCP, etc.). Develop and optimize CI/CD pipelines for multiple applications and environments. Drive Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible. Oversee monitoring, logging, and alerting solutions to ensure system health and performance. Collaborate with Development, QA, and Security teams to integrate DevOps best practices across the SDLC. Lead incident management and root cause analysis for production issues. Ensure robust security practices for infrastructure and pipelines (secrets management, vulnerability scanning, etc.). Guide and mentor team members, fostering a culture of continuous improvement and technical excellence. Evaluate and recommend new tools, technologies, and processes to improve operations.

Qualifications Education: Bachelor's degree in Computer Science, IT, or a related field; Master's preferred. At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA, Terraform). Experience: 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations. 5+ years of experience in a technical leadership or team lead role.

Skills & Abilities: Expertise in at least two major cloud platforms: AWS, Azure, or GCP. Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation. Proficient in containerization and orchestration using Docker and Kubernetes. Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch). Scripting knowledge in languages like Python, Bash, or Go. Solid understanding of networking, security, and system administration. Experience in implementing security best practices across DevOps pipelines. Proven ability to mentor, coach, and lead technical teams.

Preferred Skills: Experience with serverless architecture and microservices deployment. Experience with security tools and best practices (e.g., IAM, VPNs, firewalls, cloud security posture management). Exposure to hybrid cloud or multi-cloud environments. Knowledge of cost optimization and cloud governance strategies. Experience working in Agile teams and managing infrastructure in production-grade environments. Relevant certifications (AWS Certified DevOps Engineer, Azure DevOps Expert, CKA).

Conditions: Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office. Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences. On-Call Requirements: Light on-call rotation may be required depending on operational needs. Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST required.

AOT's Values: Our values guide how we work, collaborate, and grow as a team. Every role at AOT is expected to embody and promote these values: Innovation: We pursue true innovation by solving problems and meeting unarticulated needs. Integrity: We hold ourselves to high ethical standards and never compromise. Ownership: We are all responsible for our shared long-term success. Agility: We stay ready to adapt to change and deliver results. Collaboration: We believe collaboration and knowledge-sharing fuel innovation and success.
Empowerment : We support our people so they can bring the best of themselves to work every day. (ref:hirist.tech)
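As a small illustration of the scripting, monitoring, and alerting duties this posting lists, here is a minimal health-check loop with exponential backoff in plain Python. This is a hedged sketch, not anything from the posting itself: the function name `check_with_backoff`, the `flaky_probe` stand-in, and all parameters are hypothetical.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("healthcheck")

def check_with_backoff(probe, retries=3, base_delay=0.01):
    """Call `probe` until it returns True, retrying with exponential backoff.

    Returns True on success, or False once all retries are exhausted
    (the point at which a real setup would fire an alert or page on-call).
    """
    for attempt in range(retries):
        if probe():
            log.info("healthy on attempt %d", attempt + 1)
            return True
        delay = base_delay * (2 ** attempt)
        log.warning("unhealthy, retrying in %.2fs", delay)
        time.sleep(delay)
    return False

# A fake probe that fails twice, then succeeds.
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    return state["calls"] >= 3

print(check_with_backoff(flaky_probe))  # True, after two retries
```

In production this loop would typically be replaced by a dedicated monitor (Prometheus, CloudWatch), but the retry-then-alert shape is the same.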
Posted 22 hours ago
5.0 years
0 Lacs
Kerala, India
On-site
We’re Hiring – DevOps Analyst | Trivandrum / Kochi (Kerala)
📍 Location: Trivandrum / Kochi, Kerala
💼 Experience: 5+ years total (5+ years relevant)
Salary: 15 LPA
✅ Mandatory Skills
GitHub Actions – CI/CD orchestration & automation
Azure (Container Apps, Key Vault, Storage, Networking)
Snyk – Security scanning for SCA, container images, and IaC
SonarQube – Code quality, SAST, and technical debt management
Infrastructure-as-Code (Bicep, ARM templates, Terraform)
Test Automation – Unit, integration, performance & security
DevSecOps – Secure pipelines with automated security gates & secrets management
Jira integration – Automated change management processes
⚡ Primary Skills
Cloudflare – CDN, caching rules, WAF, performance routing
CI/CD Monitoring – Performance optimization & bottleneck resolution
Troubleshooting – L3 support & root cause analysis
Compliance – Regulatory/security standards for DevSecOps pipelines
SAP Hybris – CI/CD automation for Hybris deployments
Docker – Containerization & Azure deployment
Mentoring – Guide teams on DevOps practices & tools adoption
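The "automated security gates" item above usually means failing the pipeline when a scanner reports too many findings. As a self-contained illustration only: the report dictionary shape below is invented for this sketch (it is not Snyk's real output schema), and `security_gate` and its thresholds are hypothetical names.

```python
def security_gate(report, max_high=0, max_medium=5):
    """Return (passed, reasons) for a CI security gate.

    `report` maps severity level to finding count, e.g. {"high": 1, "medium": 2}.
    The gate fails when any severity exceeds its configured threshold.
    """
    reasons = []
    if report.get("high", 0) > max_high:
        reasons.append(f"high severity: {report['high']} > {max_high}")
    if report.get("medium", 0) > max_medium:
        reasons.append(f"medium severity: {report['medium']} > {max_medium}")
    return (not reasons, reasons)

passed, reasons = security_gate({"high": 1, "medium": 2})
print(passed, reasons)  # False ['high severity: 1 > 0']
```

In a GitHub Actions job, a step like this would exit non-zero on failure so the workflow blocks the merge.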
Posted 22 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: GenAI App Developer / Full Stack Developer / Python Backend Developer / API Developer / Prompt Engineer
Experience: 6 to 12 years
Location: Chennai
About the Role
We are seeking a skilled GenAI App Developer (or Full Stack Developer, Python Backend Developer, API Developer, Prompt Engineer) with expertise in API development, backend logic, machine learning, and NLP to contribute to large-scale GenAI applications. You'll work on API integrations, system performance optimization, and developing multi-agent workflows, all within a dynamic, collaborative environment.
Responsibilities
API Integration & Development: Identify and define API integration points, ensuring clear documentation. Design, implement, and test API endpoints (e.g., /generate, /status). Auto-generate API documentation using FastAPI & Swagger. Implement rate limiting (Flask-Limiter) and authentication (OAuth, API keys).
LLM & NLP Integration: Develop prompting logic for Large Language Models (LLMs) to ensure accurate responses. Integrate machine learning frameworks (e.g., PyTorch) and NLP libraries (e.g., spaCy). Design and implement multi-agentic workflows using patterns like actor model, publish-subscribe, and client-server.
Multi-Agentic System Design: Build and coordinate multi-agentic systems, ensuring efficient task delegation, communication, and failure handling across agents. Develop distributed task management using tools like Celery and Kubernetes.
Testing & Debugging: Write unit/integration tests with Pytest. Set up logging and monitoring for system health and debugging.
Database & Caching: Integrate with MySQL, PostgreSQL, NoSQL (e.g., BigQuery, MongoDB), and vector databases (e.g., Pinecone). Implement caching strategies (e.g., Redis, Memcached) to optimize performance.
Security & Compliance: Ensure secure API access and data protection (OAuth, API keys, input validation).
Qualifications
Proven experience in API development (e.g., FastAPI, Flask, Django).
Strong knowledge of Python, machine learning (PyTorch), and NLP (e.g., spaCy).
Expertise in API authentication (OAuth, API keys) and API documentation (Swagger).
Experience with task queues (Celery) and multi-agent workflows.
Hands-on experience with databases (MySQL, PostgreSQL, BigQuery, NoSQL).
Familiarity with caching (Redis, Memcached) and cloud platforms (AWS, Google Cloud, Azure).
Required Skills
Experience with vector databases (e.g., Pinecone, Weaviate) and cloud-based AI search (e.g., Azure AI Search).
Knowledge of CI/CD pipelines and containerization (e.g., Docker, Kubernetes).
Familiarity with API design tools (e.g., Postman) and rate limiting (Flask-Limiter).
Preferred Skills
API Frameworks: FastAPI, Flask, Django
Machine Learning & NLP: PyTorch, spaCy
Task Management: Celery
Databases: MySQL, PostgreSQL, BigQuery, MongoDB, Pinecone, Weaviate
Caching: Redis, Memcached
Cloud Platforms: AWS, Google Cloud, Azure
Version Control: Git
Security & Monitoring: OAuth, API keys, Python logging module
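The rate-limiting responsibility in this posting would normally be handled by middleware such as Flask-Limiter; as a self-contained illustration of the idea underneath, here is a minimal token-bucket sketch in plain Python. The `TokenBucket` class and its parameters are hypothetical, invented for this example.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens per second,
    allows bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
print([bucket.allow() for _ in range(3)])  # [True, True, False]
```

Production middleware adds per-client keys (IP, API key) and shared storage such as Redis, but the accept/reject decision is this same bucket check.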
Posted 23 hours ago