7.0 years
0 Lacs
India
Remote
Job Title: Senior Azure Data Engineer – Remote Contract
Location: Remote
Contract Type: Full-time Contract
Experience Required: 7+ years (including healthcare domain experience)
Duration: 6 months
Start Date: Immediate

About the Role
We are seeking an experienced Azure Data Engineer with a proven track record in healthcare domain projects to join our remote team on a contract basis. The ideal candidate will have strong expertise in Microsoft Azure data services, big data processing, and ETL pipeline development. You will work closely with our analytics, BI, and cloud architecture teams to design, implement, and optimize secure, compliant, and scalable data solutions for healthcare applications.

Key Responsibilities
- Design, develop, and maintain Azure Data Factory pipelines for ETL workflows.
- Build and optimize PySpark/Databricks scripts for large-scale healthcare data processing.
- Create and manage data lake and data warehouse solutions using Azure Data Lake Storage Gen2 and Azure Synapse Analytics.
- Integrate data from healthcare-specific sources such as EHR/EMR systems, HL7/FHIR APIs, and other medical data feeds.
- Implement Delta Lake for optimized big data storage and querying.
- Ensure data security, HIPAA compliance, and governance in all data workflows.
- Collaborate with BI teams to deliver analytical dashboards in Power BI for healthcare insights and reporting.
- Participate in Agile/Scrum ceremonies and maintain detailed technical documentation.

Required Skills & Qualifications
- 7+ years of experience in Data Engineering, with at least 3 years in Azure Cloud.
- Mandatory: minimum 2+ years of experience in healthcare domain projects, with exposure to healthcare standards (HIPAA, HL7, FHIR, ICD codes).
- Proficiency in Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage.
- Strong programming skills in SQL, Python, and PySpark.
- Experience in data modeling (star/snowflake schema) and data warehousing concepts.
- Hands-on experience with Delta Lake, Apache Spark, and distributed data processing.
- Familiarity with CI/CD tools such as Azure DevOps or GitHub Actions.
- Strong problem-solving skills and the ability to work independently in a remote environment.
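To make the FHIR-integration responsibility above concrete, here is a minimal, hedged Python sketch of flattening a FHIR R4 Patient resource into a warehouse-style row. The sample resource is hand-made for illustration (not real patient data), and a production pipeline would run a transform like this inside PySpark rather than plain Python.

```python
# Minimal sketch: flatten a FHIR R4 Patient resource into a flat row,
# as an ETL transform step might. Sample data below is invented.

def flatten_patient(resource: dict) -> dict:
    """Extract commonly warehoused fields from a FHIR Patient resource."""
    name = (resource.get("name") or [{}])[0]
    return {
        "patient_id": resource.get("id"),
        "family_name": name.get("family"),
        "given_name": " ".join(name.get("given", [])),
        "gender": resource.get("gender"),
        "birth_date": resource.get("birthDate"),
    }

patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane", "Q"]}],
    "gender": "female",
    "birthDate": "1980-04-01",
}

row = flatten_patient(patient)
# row["patient_id"] -> "example-001"; row["given_name"] -> "Jane Q"
```

In a real pipeline the same logic would handle missing and repeated `name` entries per the FHIR spec and carry provenance columns for HIPAA auditability.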
Posted 4 days ago
0.0 - 1.0 years
6 - 8 Lacs
Mumbai, Maharashtra
On-site
Job Title: AWS & DevOps Engineer (3 Years Experience)
Location: Pune/Mumbai
Experience: 2.5 to 3+ years
Employment Type: Full-Time

Roles & Responsibilities:
● Deploy, configure, and manage AWS cloud infrastructure (EC2, VPC, S3, RDS, IAM, CloudWatch, ELB, etc.)
● Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or GitHub Actions
● Implement Infrastructure as Code (IaC) using Terraform or AWS CloudFormation
● Build, deploy, and manage Docker containers and orchestration using Kubernetes/EKS
● Monitor system health, availability, and performance with tools like CloudWatch, Prometheus, Grafana, or New Relic
● Automate repetitive tasks and improve deployment processes using shell scripts or Python
● Apply security best practices for cloud and container environments
● Collaborate with development and QA teams to support automated build/test/release

Key Skills Required:
● 3+ years of experience with AWS cloud services
● Strong hands-on experience with Linux administration
● Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
● Working knowledge of Docker and Kubernetes
● Experience with Terraform or other IaC tools
● Version control systems such as Git
● Familiarity with scripting (Bash, Shell, or Python)
● Understanding of security, networking, and firewall configurations

Soft Skills:
● Proactive problem-solving and troubleshooting mindset
● Ability to work independently and in a fast-paced team environment
● Good communication and collaboration skills

Good to Have:
● AWS certification (e.g., AWS Certified Solutions Architect – Associate)
● Basic understanding of DevSecOps practices
● Experience with Agile and Scrum methodologies
● Exposure to MLOps-related services

Note: must be ready to move onsite (domestic/international) if required.

Job Types: Full-time, Permanent
Pay: ₹600,000.00 – ₹800,000.00 per year
Benefits: health insurance, paid sick time, paid time off, Provident Fund
Experience: DevOps: 3 years (required); AWS: 1 year (required)
Location: Mumbai, Maharashtra (preferred)
Work Location: In person
Speak with the employer: +91 7876212244
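As a hedged illustration of the "automate repetitive tasks ... using Python" responsibility above, here is a small sketch that scans deployment logs and flags when the error rate crosses a rollback threshold. The log format and the 25% threshold are invented for the example, not taken from the posting.

```python
# Illustrative automation sketch: flag a deployment for rollback when too
# many log lines are at ERROR level. Log format and threshold are invented.
from collections import Counter

def error_rate(log_lines: list) -> float:
    """Fraction of non-empty log lines whose first token is ERROR."""
    levels = Counter(line.split()[0] for line in log_lines if line.strip())
    total = sum(levels.values())
    return levels["ERROR"] / total if total else 0.0

def should_roll_back(log_lines: list, threshold: float = 0.25) -> bool:
    return error_rate(log_lines) > threshold

logs = [
    "INFO  deploy started",
    "ERROR health check failed",
    "INFO  retrying",
    "ERROR health check failed",
]
# 2 of 4 lines are errors, so the 0.25 threshold is breached
```

In practice a script like this would sit behind a Jenkins or GitHub Actions step, exiting nonzero to gate the pipeline.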
Posted 4 days ago
1.0 years
0 Lacs
India
Remote
Hi folks, please check the JD below and share your updated resume to naresh@sapphiresoftwaresolutions.com, or ping me on WhatsApp (+91 970-529-6474) along with your resume.

Role: SOX Controls Tester (1-year contract, Remote)
Hours: Night shift

Must Have:
- 6–8 years of SOX control testing experience
- Extensive knowledge of SOX ITGC and ITAC controls
- Hands-on knowledge of the COBIT framework; familiarity with NIST/COSO
- Expert-level Excel skills (pivot tables, complex formulas)
- Expert-level experience conducting UAR on SailPoint
- Experience testing controls of cloud, SAP, and DevOps tools (GitHub, GitLab, Azure, AWS)
- Experience with one of the Big Four (Deloitte, EY, PwC, KPMG)

Pluses:
- CISA certification (Certified Information Systems Auditor)
- CISSP certification (Certified Information Systems Security Professional)

Job Summary:
We are seeking a SOX Controls Tester with deep expertise in ITGC and ITAC to support SOX monitoring efforts across various systems, with a particular focus on testing in SailPoint. This role operates within the first line of defense, contributing to SOX readiness initiatives. The ideal candidate will possess a strong understanding of SOX compliance requirements and the ITGC/ITAC framework, with proven experience in designing, executing, and documenting control testing procedures. Responsibilities include identifying control deficiencies, recommending effective remediation strategies, and managing the end-to-end audit process. Advanced Excel skills are essential, including proficiency with complex formulas, pivot tables, and large datasets. The candidate must also be skilled in scripting languages to extract and analyze data, and capable of troubleshooting issues within automated scripts and data analysis workflows. Strong verbal and written communication skills are critical for documenting findings and collaborating with IT and business stakeholders. Meticulous attention to detail is required to ensure accuracy and thoroughness in all aspects of testing and documentation.
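A hedged sketch of the "extract and analyze data" scripting this role calls for, applied to a user access review (UAR): listing entitlements still assigned to terminated users, a classic ITGC finding. The records and field names below are invented; a real review would export this data from SailPoint.

```python
# Invented sample of a user access review export; in practice this would
# come from SailPoint, not be hard-coded.
access = [
    {"user": "avi",   "entitlement": "SAP_FI_POST",  "status": "active"},
    {"user": "meera", "entitlement": "AWS_ADMIN",    "status": "terminated"},
    {"user": "meera", "entitlement": "GITHUB_WRITE", "status": "terminated"},
    {"user": "john",  "entitlement": "SAP_FI_POST",  "status": "active"},
]

def stale_access(records):
    """(user, entitlement) pairs still assigned to terminated users."""
    return sorted(
        (r["user"], r["entitlement"])
        for r in records
        if r["status"] == "terminated"
    )

findings = stale_access(access)
# findings -> [("meera", "AWS_ADMIN"), ("meera", "GITHUB_WRITE")]
```

The same grouping is what an Excel pivot table over the export would show; scripting it makes the test repeatable and auditable.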
Posted 4 days ago
6.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Our technology services client is seeking multiple Dot Net Full Stack Developer – React JS professionals to join their team on a contract basis. These positions offer strong potential for conversion to full-time employment upon completion of the initial contract period. Further details about the role:

Role: Dot Net Full Stack Developer – React JS
Experience: 6–8 years
Location: Hyderabad, Gurugram
Notice Period: Immediate to 15 days
Mandatory Skills: React.js, UX/UI collaboration, Node.js, RESTful APIs, GQL, EC2, S3, Lambda, Kafka, GitHub, RDBMS

Key Responsibilities:
- Design, develop, and maintain web applications using React for frontend development and Node.js or similar technologies for backend services.
- Develop and consume RESTful APIs to facilitate seamless communication between frontend and backend systems.
- Collaborate with UX/UI designers to implement responsive and user-friendly interfaces.
- Utilize RDBMS (relational database management systems) for data storage and retrieval, ensuring optimal database design and performance.
- Implement AWS services for application deployment, scaling, and management, ensuring high availability and security.
- Write and optimize Graph Query Language (GQL) queries to interact with graph databases effectively.
- Integrate messaging systems like Kafka for real-time data processing and event-driven architecture.
- Conduct code reviews, unit testing, and debugging to maintain code quality and performance.
- Participate in agile development processes, including sprint planning, daily standups, and retrospectives.

Qualifications:
- Bachelor's degree in computer science or equivalent.
- Proven experience in full-stack development with a strong focus on React and REST APIs.
- In-depth knowledge of RDBMS technologies (MSSQL/MySQL/PostgreSQL).
- Familiarity with AWS services (e.g., EC2, S3, Lambda) and cloud architecture.
- Experience with Graph Query Language (GQL) and graph databases.
- Knowledge of messaging systems like Kafka or similar technologies.
- Proficiency in using AI tools such as GitHub Copilot for coding assistance and productivity.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills, both verbal and written.

If you are interested, share your updated resume to hema.g@s3staff.com.
Posted 4 days ago
13.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Data Engineer (AWS, Python, SQL, Big Data)
Experience: 8–13 years
Location: Gurgaon
Interview Mode: Face-to-face drive, 30th August
Domain Preference: Financial services industry experience preferred
Notice Period: 60 to 90 days (immediate joiners will also be considered if available)

Position Overview
We are seeking an experienced Senior Data Engineer with strong expertise in AWS, Python, SQL, and Big Data technologies, along with modern data pipeline development experience. The successful candidate will design, develop, and maintain scalable data engineering solutions, contribute to CI/CD delivery pipelines, and play a key role in analytics and data-driven development within a fast-paced enterprise environment.

Key Capabilities
- Passion for technology and keeping up with the latest trends
- Ability to articulate complex technical issues and system enhancements
- Proven analytical and evidence-based decision-making skills
- Strong problem-solving, troubleshooting, and documentation abilities
- Excellent written and verbal communication skills
- Effective collaboration and interpersonal skills
- High delivery focus with commitment to quality and auditability
- Ability to self-manage and work in a fast-paced environment
- Agile software development practices

Desired Skills & Experience
- Hands-on experience in SQL and Big Data SQL variants (HiveQL, Snowflake ANSI, Redshift SQL)
- Expertise in Python, Spark (PySpark, Spark SQL, Scala), and Bash/shell scripting
- Experience with source code control tools (GitHub, VSTS, Bitbucket)
- Familiarity with Big Data technologies: the Hadoop stack (HDFS, Hive, Impala, Spark) and cloud warehouses (AWS Redshift, Snowflake)
- Unix/Linux command-line experience
- Exposure to AWS services: EMR, Glue, Athena, Data Pipeline, Lambda
- Knowledge of data models (Star Schema, Data Vault 2.0)

Essential Experience
- 8–13 years of technical experience, preferably in the financial services industry
- Strong background in Data Engineering/BI/software development, ELT/ETL, and data transformation in Data Lake / Data Warehouse / Lake House environments
- Programming with Python, SQL, Unix shell scripts, and PySpark in enterprise-scale environments
- Experience in configuration management (Ansible, Jenkins, Git)
- Cloud design and development experience with AWS and Azure
- Proficiency with AWS services (S3, EC2, EMR, SNS, SQS, Lambda, Redshift)
- Building data pipelines on Databricks Delta Lake from databases, flat files, and streaming sources
- CI/CD pipeline automation (Jenkins, Docker)
- Experience with Terraform, Kubernetes, and Docker
- RDBMS experience: Oracle, MS SQL, DB2, PostgreSQL, MySQL, including performance tuning and stored procedures
- Knowledge of Power BI (recommended)

Qualification Requirements
- Bachelor's or Master's degree in a technology-related discipline (Computer Science, IT, Data Engineering, etc.)

Key Accountabilities
- Design, develop, test, deploy, maintain, and improve software and data solutions
- Create technical documentation, flowcharts, and layouts to define solution requirements
- Write clean, high-quality, testable code
- Integrate software components into fully functional platforms
- Apply best practices for CI/CD and cloud-based deployments
- Mentor other team members and share data engineering best practices
- Troubleshoot, debug, and upgrade existing solutions
- Ensure compliance with industry standards and regulatory requirements

Interview Drive Details
- Mode: Face-to-face (F2F)
- Date: 30th August
- Location: Gurgaon
- Notice Period: 60 to 90 days (immediate joiners considered a plus)
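The Data Vault 2.0 knowledge asked for above centers on hash-keyed hubs, links, and satellites. As an illustrative sketch only: computing a hub hash key from normalized business keys with the stdlib. The `||` delimiter and upper-case normalization are one common convention, not a mandate of the standard.

```python
# Hedged sketch of a Data Vault 2.0-style hub hash key. Normalization
# convention (trim, upper-case, "||" delimiter) is one common choice.
import hashlib

def hub_hash_key(*business_keys: str, delimiter: str = "||") -> str:
    """MD5 hex digest of normalized business keys, used as a hub surrogate key."""
    normalized = delimiter.join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same customer key yields the same hash regardless of casing/padding,
# which is what makes hash keys stable across source systems.
k1 = hub_hash_key(" cust-42 ")
k2 = hub_hash_key("CUST-42")
```

Deterministic keys like this let loads into hubs and satellites run in parallel without sequence-generator coordination, which is the main reason Data Vault 2.0 prefers them.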
Posted 4 days ago
4.0 - 11.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Hello, greetings from Quess Corp! Hope you are doing well. We have a job opportunity with one of our clients.

Designation: Data Engineer
Location: Gurugram
Experience: 8 to 15 years
Qualification: Graduate / PG (IT)
Skill Set: Data Engineering, Python, AWS, SQL

Essential Capabilities
- Enthusiasm for technology and keeping up with the latest trends
- Ability to articulate complex technical issues and desired outcomes of system enhancements
- Proven analytical skills and evidence-based decision making
- Excellent problem-solving, troubleshooting, and documentation skills
- Strong written and verbal communication skills
- Excellent collaboration and interpersonal skills
- Strong delivery focus with an active approach to quality and auditability
- Ability to work under pressure and excel within a fast-paced environment
- Ability to self-manage tasks
- Agile software development practices

Desired Experience
- Hands-on SQL and its Big Data variants (HiveQL, Snowflake ANSI, Redshift SQL)
- Python and Spark with one or more of its APIs (PySpark, Spark SQL, Scala); Bash/shell scripting
- Experience with source code control: GitHub, VSTS, etc.
- Knowledge of and exposure to Big Data technologies: the Hadoop stack (HDFS, Hive, Impala, Spark, etc.) and cloud Big Data warehouses (Redshift, Snowflake, etc.)
- Experience with Unix command-line tools
- Exposure to AWS technologies including EMR, Glue, Athena, Data Pipeline, Lambda, etc.
- Understanding of, and ability to translate/physicalise, data models (Star Schema, Data Vault 2.0, etc.)

Essential Experience
It is expected that the role holder will most likely have the following qualifications and experience:
- 4–11 years of technical experience (within the financial services industry preferred)
- Technical domain experience (subject matter expertise in technology or tools)
- Solid experience, knowledge, and skills in data engineering and BI/software development, such as ELT/ETL and data extraction and manipulation in Data Lake / Data Warehouse / Lake House environments
- Hands-on programming experience writing Python, SQL, Unix shell scripts, and PySpark scripts in a complex enterprise environment
- Experience in configuration management using Ansible/Jenkins/Git
- Hands-on cloud-based solution design, configuration, and development experience with Azure and AWS
- Hands-on experience using AWS services: S3, EC2, EMR, SNS, SQS, Lambda functions, Redshift
- Hands-on experience building data pipelines to ingest and transform on the Databricks Delta Lake platform from a range of data sources: databases, flat files, streaming, etc.
- Knowledge of data modelling techniques and practices used for Data Warehouse / Data Mart applications
- Quality engineering development experience (CI/CD: Jenkins, Docker)
- Experience in Terraform, Kubernetes, and Docker
- Experience with source control tools: GitHub or Bitbucket
- Exposure to relational databases: Oracle, MS SQL, or DB2 (SQL/PLSQL, database design, normalisation, execution plan analysis, index creation and maintenance, stored procedures), PostgreSQL/MySQL
- Skilled in querying data from a range of data sources that store structured and unstructured data
- Knowledge or understanding of Power BI (recommended)

Key Accountabilities
- Design, develop, test, deploy, maintain, and improve software
- Develop flowcharts, layouts, and documentation to identify requirements and solutions
- Write well-designed, high-quality, testable code
- Produce specifications and determine operational feasibility
- Integrate software components into a fully functional platform
- Proactively perform hands-on design and implementation of best-practice CI/CD
- Coach and mentor other service team members
- Develop and contribute to software verification plans and quality assurance procedures
- Document and maintain software functionality
- Troubleshoot, debug, and upgrade existing systems, including participating in DR tests
- Deploy programs and evaluate customer feedback
- Contribute to team estimation for delivery and expectation management for scope
- Comply with industry standards and regulatory requirements
Posted 4 days ago
5.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
EXL Decision Analytics EXL (NASDAQ:EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and Philippines), Latin America, Australia and South Africa. EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients' decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assist client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics. Job Overview We are looking for a skilled Data Engineer with strong expertise in Python, Databricks, PySpark, Plotly Dash, Data Analysis, SQL, and Query Optimization. 
The ideal candidate will be responsible for developing scalable data pipelines, performing complex data analysis, and building interactive dashboards to support business decision-making.

Key Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines using PySpark and Databricks.
- Perform data extraction, transformation, and loading (ETL) from diverse structured and unstructured data sources.
- Write and optimize complex SQL queries for high performance and scalability across large datasets.
- Build and maintain interactive dashboards and data visualizations using Plotly Dash or similar frameworks.
- Collaborate closely with data scientists, analysts, and business stakeholders to gather and understand data requirements.
- Ensure data quality, consistency, and integrity throughout the data lifecycle using validation and monitoring techniques.
- Develop and maintain modular, reusable, and well-documented code and technical documentation for data workflows and processes.
- Implement data governance, security, and compliance best practices.

Candidate Profile
- 5+ years of relevant experience with data engineering tools
- Programming languages: Python and SQL
- Python frameworks: Plotly Dash, Flask, FastAPI
- Data processing tools: pandas, NumPy, PySpark
- Cloud platforms: Databricks (for scalable computing resources)
- Version control and collaboration: Git, GitHub, GitLab
- Deployment and monitoring: Databricks, Docker, Kubernetes

What We Offer
EXL Analytics offers an exciting, fast-paced, and innovative environment, which brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses that our clients engage in. You will also learn effective teamwork and time-management skills, key aspects for personal and professional growth.
Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
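The query-optimization responsibility in this posting is easy to demonstrate in miniature. Below is a hedged sketch using the stdlib sqlite3 engine (the posting's actual warehouse would be Databricks/Spark SQL, and the table and column names are invented): the same filter query goes from a full table scan to an index search once an index exists.

```python
# Illustrative sketch of index-driven query optimization, using sqlite3
# from the stdlib. Table/column names are invented for the example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT, kind TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}", "click") for i in range(1000)],
)

query = "SELECT COUNT(*) FROM events WHERE user_id = 7"

# Without an index, the plan is a full table scan...
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# ...after adding one, the plan switches to an index search.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
```

The habit generalizes: read the engine's plan output (EXPLAIN in Spark SQL or PostgreSQL) before and after a change, rather than guessing at what made a query fast.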
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Location: Gurgaon, Haryana
Job Type: In office, full-time
Role Type: Individual contributor, accountable for their technical expertise; no team management required.
Experience Level: 3+ years
Team: Product & Engineering
Reports to: Project Manager
Annual CTC: 10–15 LPA

Role Objective
To independently develop, maintain, and optimize CITiLIGHT's cross-platform mobile application using React Native, and to ensure design and functionality consistency with the existing Angular-based web platform. The role requires collaboration with backend engineers, web developers, and designers to build a seamless mobile experience for global IoT-led smart infrastructure solutions.

About CITiLIGHT
CITiLIGHT is reshaping how smart infrastructure works globally. We operate at the intersection of smart cities, sustainability, and innovation, helping city administrators and private stakeholders deploy cutting-edge IoT-led solutions at scale. As we expand our platform capabilities, we are looking for passionate and skilled engineers to help us build robust, user-friendly, and scalable applications that drive real-world impact.

Key Responsibilities
1. Mobile App Development & Optimization
- Design, develop, and maintain cross-platform applications using React Native (JavaScript/TypeScript).
- Ensure high performance and compatibility across both Android and iOS platforms.
2. UI/UX Implementation
- Build UIs based on provided designs, or from scratch when needed, using tools like Figma, Adobe XD, or Zeplin.
- Collaborate with Angular web developers for design consistency across platforms.
- Build clean, responsive, and intuitive interfaces that align with user behavior and IoT interaction models.
3. API Integration & Communication
- Integrate RESTful APIs and WebSocket endpoints.
- Ensure robust error handling, data parsing, and session management.
- Handle real-time data sync, push notifications, and asynchronous operations.
4. Cross-Functional Collaboration
- Coordinate with other members of the development/R&D team for timely and aligned delivery.
- Contribute to sprint planning, code reviews, and product discussions.
- Participate in version control, documentation, and knowledge-sharing efforts.
- Participate in and contribute to org-level events like townhalls, stepbacks, and retreats.

Job Requirements
Technical Skills
- React Native: proven experience building and deploying cross-platform mobile apps.
- JavaScript & TypeScript: strong command of both for scalable and typed development.
- Angular (basic to intermediate): familiarity with component architecture for design parity.
- API integration: REST, JSON, WebSockets; handling response states and errors gracefully.
- UI/UX tools: Figma, Adobe XD, or Zeplin for design interpretation.
- Version control: Git/GitHub workflows and pull-request practices.
- Mobile debugging: tools like Flipper and Chrome DevTools for React Native.
- Basic knowledge of backend development using Java.
- Familiarity with mobile deployment processes (Google Play Console, App Store Connect).

Nice to Have
- Exposure to IoT protocols (MQTT, BLE) or device integrations.
- Experience with Capacitor/Cordova to extend Angular apps into mobile.
- Knowledge of offline data handling or background services in mobile.

Soft Skills
- Strong ownership and self-management to ensure effectiveness in a lean team environment.
- Clear and proactive communication with cross-functional stakeholders.
- Attention to detail, especially in design fidelity and app performance.
- Curiosity and adaptability in working with emerging technologies and tools.
- Alignment with our core value of pursuing excellence, and a desire to create world-class work.

Education & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3+ years of professional experience in mobile development with React Native.
- Prior work on mobile apps involving real-time data, IoT, or infrastructure is a plus.
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: MLOps Engineer / ML Engineer
Experience: 5–10 years
Location: Chennai

Job Overview:
We are looking for an experienced MLOps Engineer to help deploy, scale, and manage machine learning models in production environments. You will work closely with data scientists and engineering teams to automate the machine learning lifecycle, optimize model performance, and ensure smooth integration with data pipelines.

Key Responsibilities:
Transform prototypes into production-grade models
- Assist in building and maintaining machine learning pipelines and infrastructure across cloud platforms such as AWS, Azure, and GCP.
- Develop REST APIs or FastAPI services for model serving, enabling real-time predictions and integration with other applications.
- Collaborate with data scientists to design and develop drift detection and accuracy measurements for live deployed models.
- Collaborate with data governance and technical teams to ensure compliance with engineering standards.

Maintain models in production
- Collaborate with data scientists and engineers to deploy, monitor, update, and manage models in production.
- Manage the full CI/CD cycle for live models, including testing and deployment.
- Develop logging, alerting, and mitigation strategies for handling model errors and optimizing performance.
- Troubleshoot and resolve issues related to ML model deployment and performance.
- Support both batch and real-time integrations for model inference, ensuring models are accessible through APIs or scheduled batch jobs depending on the use case.

Contribute to the AI platform and engineering practices
- Contribute to the development and maintenance of the AI infrastructure, ensuring models are scalable, secure, and optimized for performance.
- Collaborate with the team to establish best practices for model deployment, version control, monitoring, and continuous integration/continuous deployment (CI/CD).
- Drive the adoption of modern AI/ML engineering practices and help enhance the team's MLOps capabilities.
- Develop and maintain Flask- or FastAPI-based microservices for serving models and managing model APIs.

Minimum Required Skills:
- Bachelor's degree in computer science, analytics, mathematics, or statistics.
- Strong experience in Python, SQL, and PySpark.
- Solid understanding of containerization technologies (Docker, Podman, Kubernetes).
- Proficiency in CI/CD pipelines, model monitoring, and MLOps platforms (e.g., AWS SageMaker, Azure ML, MLflow).
- Proficiency in cloud platforms, specifically AWS, Azure, and GCP.
- Familiarity with ML frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Familiarity with batch-processing integration for large-scale data pipelines.
- Experience serving models using FastAPI, Flask, or similar frameworks for real-time inference.
- Certifications in AWS, Azure, or ML technologies are a plus.
- Experience with Databricks is highly valued.
- Strong problem-solving and analytical skills.
- Ability to work in a team-oriented, collaborative environment.

Tools and Technologies:
- Model development & tracking: TensorFlow, PyTorch, scikit-learn, MLflow, Weights & Biases
- Model packaging & serving: Docker, Kubernetes, FastAPI, Flask, ONNX, TorchScript
- CI/CD & pipelines: GitHub Actions, GitLab CI, Jenkins, ZenML, Kubeflow Pipelines, Metaflow
- Infrastructure & orchestration: Terraform, Ansible, Apache Airflow, Prefect
- Cloud & deployment: AWS, GCP, Azure, serverless (Lambda, Cloud Functions)
- Monitoring & logging: Prometheus, Grafana, ELK Stack, WhyLabs, Evidently AI, Arize
- Testing & validation: pytest, unittest, Pydantic, Great Expectations
- Feature stores & data handling: Feast, Tecton, Hopsworks, pandas, Spark, Dask
- Message brokers & data streams: Kafka, Redis Streams
- Vector DBs & LLM integrations (optional): Pinecone, FAISS, Weaviate, LangChain, LlamaIndex, PromptLayer
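The drift-detection responsibility above is often implemented as a Population Stability Index (PSI) between the training score distribution and live scores. Here is a pure-Python sketch; the bucket edges and the common "alert above 0.2" rule of thumb are illustrative assumptions, not requirements from the posting.

```python
# Hedged sketch of PSI-based drift detection for a live model's scores.
# Bucket edges and the 0.2 alert threshold are common conventions.
import math

def psi(expected, actual, edges) -> float:
    """PSI over shared bucket edges; higher means the live distribution drifted."""
    def frac(xs, lo, hi):
        n = sum(lo <= x < hi for x in xs)
        return max(n / len(xs), 1e-6)  # floor avoids log(0) on empty buckets
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # training-time scores
shifted = [0.7, 0.8, 0.8, 0.9, 0.9, 0.9, 0.95, 0.99]  # drifted live scores
edges = [0.0, 0.25, 0.5, 0.75, 1.01]
```

In production the same computation would run on a schedule against the serving logs, with the PSI value exported as a metric so alerting lives in the monitoring stack rather than the model code.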
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our client is a prominent Indian multinational corporation specializing in information technology (IT), consulting, and business process services. Headquartered in Bengaluru, it has gross revenue of ₹222.1 billion, a global workforce of 234,054, and a NASDAQ listing; it operates in over 60 countries and serves clients across various industries, including financial services, healthcare, manufacturing, retail, and telecommunications. The company has consolidated its cloud, data, analytics, AI, and related businesses under its tech services business line, with major delivery centers in India in cities including Chennai, Pune, Hyderabad, Bengaluru, Kochi, Kolkata, and Noida.

Job Title: Datadog
· Location: Pune, Bangalore (hybrid)
· Experience: 5+ years
· Job Type: Contract to hire
· Notice Period: Immediate joiners

Mandatory Skills:
- 5+ years of hands-on experience with Datadog's stack in multi-cloud or hybrid-cloud environments
- Strong background in systems engineering or software development
- Experience with Kubernetes and cloud platforms (AWS, GCP, Azure)
- Strong proficiency in programming and scripting languages like Go, Python, or Java
- Familiarity with monitoring, alerting, and incident response practices
- Deep understanding of cloud-native architectures and microservices
- Experience with high-throughput, low-latency systems
- Strong communication skills
- Experience with CI/CD pipelines and monitoring tools
- Deep understanding of Windows and Linux systems, networking, and operating system internals
- Experience with distributed systems and high-availability architectures
- Strong experience with Docker, Kubernetes, and service mesh technologies
- Experience with tools like Terraform, Ansible, or Pulumi (optional) would be an extra advantage
- Building dashboards, monitors, and alert setup systems
- Familiarity with Jenkins, GitHub Actions, CircleCI, or similar
- Automating deployments, rollbacks, and testing pipelines
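The "building dashboards, monitors, and alert setup systems" skill above boils down to logic like the following. This is a hedged, stdlib-only sketch of a monitor's evaluation rule (not Datadog's API): alert when a metric's rolling average over the last N points crosses a threshold, the shape of an "avg over last window > X" monitor.

```python
# Illustrative monitor-evaluation logic, not Datadog's API: alert when the
# rolling average over the last `window` datapoints exceeds `threshold`.
from collections import deque

class RollingMonitor:
    def __init__(self, window: int, threshold: float):
        self.points = deque(maxlen=window)
        self.threshold = threshold

    def record(self, value: float) -> bool:
        """Add a datapoint; return True if the window average breaches the threshold."""
        self.points.append(value)
        return sum(self.points) / len(self.points) > self.threshold

monitor = RollingMonitor(window=3, threshold=100.0)
states = [monitor.record(v) for v in [80, 90, 120, 150, 160]]
# alert fires only once the 3-point average climbs past 100
```

A real Datadog monitor would be defined declaratively (e.g. via Terraform) and evaluated server-side; the sketch just shows why windowed averages suppress one-point spikes.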
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Notice Period: 0–15 days only.

Required Skills (80–90%):
1. 3 or more years of embedded software development
2. Experience with a formal software development process (such as Agile)
3. Knowledge of embedded software development tools like VS Code, the C2000 SDK, make utilities (CMake), memory-map configurations, etc.
4. Hands-on experience with C/C++
5. DevOps tools: GitHub; Git configuration for automation of pre- and post-hooks
6. Experience developing in a Unix/Linux environment (Yocto)
7. Basic knowledge of RTOS and Linux
8. Passion for software

Desired Skills (10–20%):
1. GitHub Actions (YAML)
2. Bash and Python scripting
3. Experience with GitHub Cookiecutter
4. Working knowledge of Docker containers
5. Knowledge of the theory and use of test-driven development (GTest)
6. Visual Studio Code extension and plugin creation
7. Basic understanding of REST APIs
8. Basics of cybersecurity

Please share your CV to jigneshkumar.s@acldigital.com
Posted 4 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview
Location: Gurgaon (this opening is for our client company)

We're looking for a Senior Project Manager / Program Manager to lead multiple high-impact AI programs for global aviation clients. This role blends technical program leadership, strategic client engagement, and cross-functional team management to deliver innovation at scale. If you thrive on driving AI projects from concept to delivery, managing complexity, and working with brilliant data scientists, engineers, and aviation experts, this is your runway.

Key Responsibilities
• Lead end-to-end planning, execution, and delivery of multiple AI/ML projects in the aviation domain.
• Define project scope, objectives, and success criteria in alignment with client requirements and Futops' strategic goals.
• Manage multi-disciplinary teams (Data Scientists, AI/ML Engineers, Software Developers, QA, DevOps) to ensure on-time, high-quality delivery.
• Collaborate with aviation domain experts to ensure solutions meet industry safety, compliance, and operational standards.
• Oversee resource allocation, risk management, change control, and budget tracking for all projects in the program.
• Serve as the primary client contact, providing regular updates, resolving escalations, and ensuring high customer satisfaction.
• Drive agile project management practices, continuous improvement, and team motivation.
• Coordinate integration of AI solutions with clients' existing aviation systems and infrastructure.
• Track and report program-level KPIs to senior leadership and stakeholders.

Must-Have Skills
• 10+ years of project/program management experience, with at least 4 years managing AI/ML or data-driven software projects.
• Proven track record of delivering complex, multi-stakeholder technology programs.
• Strong understanding of the AI/ML development lifecycle, data pipelines, and model deployment.
• Excellent stakeholder management, communication, and negotiation skills.
• Experience in budgeting, forecasting, and resource planning for large-scale projects.
• Familiarity with aviation industry processes, safety standards, and regulations.

Nice-to-Have Skills
• Exposure to aviation-specific AI applications such as predictive maintenance, route optimization, passenger analytics, or airport operations.
• Knowledge of computer vision, NLP, and edge AI deployments.
• PMP / PRINCE2 / Agile certifications.
• Experience working with international aviation clients and multi-time-zone teams.
• Familiarity with regulatory compliance frameworks in aviation (e.g., FAA, EASA).

Tools & Technologies
• Project Management: Jira, Confluence, MS Project, Trello
• AI/ML Collaboration: MLflow, Weights & Biases, DataRobot, Jupyter
• Communication: Slack, MS Teams, Zoom
• Cloud Platforms: AWS, Azure, GCP (AI/ML services)
• Version Control & CI/CD: Git, GitHub, GitLab, Jenkins

KPIs & Expected Outcomes
• On-Time Delivery: ≥ 95% of milestones met within agreed timelines.
• Quality Metrics: less than 3% post-deployment defects in AI deliverables.
• Client Satisfaction: maintain a CSAT score ≥ 4.5/5 across projects.
• Budget Adherence: ±5% variance from approved budgets.
• Team Productivity: achieve ≥ 90% planned sprint completion rate.
• Innovation Contribution: drive at least 2 process improvements or solution innovations per quarter.
Posted 4 days ago
7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Integration Specialist (IBM ACE/IIB)
Total Years of Experience: 7+ years
Relevant Years of Experience: 7 years

Mandatory Skills:
• IBM App Connect Enterprise (ACE) / IBM Integration Bus (IIB)
• ESQL, MQ, REST/SOAP APIs
• Kubernetes / OpenShift (for CP4I)
• DevOps tools: Jenkins, Ansible, Terraform

Job Description:
• Design, deploy, and manage integration solutions using IBM App Connect Enterprise (ACE) to connect applications, APIs, and data across hybrid cloud environments.
• Develop and deploy integration flows using ESQL, Java, and REST/SOAP.
• Administer IBM ACE, including installation, configuration, and monitoring.
• Deploy ACE solutions on-premises, on cloud platforms (IBM Cloud, AWS, Azure), or on Cloud Pak for Integration (CP4I).
• Implement security standards (TLS, OAuth) and optimize performance.
• Automate deployments using CI/CD tools (Jenkins, GitHub Actions) and scripting (Bash, Python).
• Troubleshoot integration issues and provide technical support.

Nice to Have Skills: Banking domain experience

Preferred Certifications:
• IBM Certified Developer/Administrator – ACE
• Red Hat OpenShift (for CP4I)

Location: Mumbai (only)
Posted 4 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media, and is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: AWS Services Developer
Skills: AWS, TypeScript, AWS Lambda, API Gateway, DynamoDB, RDS, SQS
Job Location: Hyderabad
Experience: 5-10 years
Budget: 15 LPA
Education Qualification: Any graduation
Work Mode: Hybrid
Employment Type: Contract
Notice Period: Immediate - 15 days
Interview Mode: 2 rounds of technical interview

Job Description:

Key Responsibilities:
• Design, build, and maintain serverless applications using AWS Lambda, API Gateway, DynamoDB, RDS, SQS, and Route 53.
• Implement secure, scalable integrations across internal and external systems.
• Write clean, testable TypeScript code aligned with technical specifications.
• Write unit and API tests.
• Write CDK scripts to support infrastructure deployment.
• Manage CI/CD pipelines (preferably GitHub Actions) and support different deployment strategies.
• Collaborate with solution architects and analysts to understand and deliver requirements.

Required Skills:
• 3+ years of professional experience with TypeScript and AWS.
• Proven experience with serverless architectures and event-driven systems.
• Solid grasp of cloud security best practices.
• Familiarity with CI/CD, GitHub, and infrastructure-as-code.
• Experience with DynamoDB Streams and CloudWatch.
• Familiarity with API specifications (OpenAPI/Swagger).
• Experience working in an Agile environment.
• Good people and interpersonal skills.

Interested candidates, please share your CV to sushma.n@people-prime.com
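For candidates new to the stack this role describes, here is a minimal, hedged sketch of the kind of API Gateway → Lambda handler the posting involves. The event and response shapes are simplified and all names are illustrative; a real project would use the typed definitions from `@types/aws-lambda` and back the handler with a DynamoDB call.

```typescript
// Simplified shapes of an API Gateway proxy event and response.
// Illustrative only; not the full AWS types.
interface ApiEvent {
  httpMethod: string;
  pathParameters?: Record<string, string>;
  body?: string;
}

interface ApiResponse {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

// Helper that serializes a payload into an API Gateway-style response.
function respond(statusCode: number, payload: unknown): ApiResponse {
  return {
    statusCode,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  };
}

// Minimal async Lambda handler: routes on HTTP method, returns JSON.
export const handler = async (event: ApiEvent): Promise<ApiResponse> => {
  try {
    if (event.httpMethod === "GET") {
      const id = event.pathParameters?.id ?? "unknown";
      // In a real service this would be a DynamoDB GetItem call.
      return respond(200, { id, status: "ok" });
    }
    return respond(405, { error: "method not allowed" });
  } catch {
    return respond(500, { error: "internal error" });
  }
};
```

In a CDK-based project of the kind the posting mentions, this handler would be wired to an API Gateway route and deployed through the CI/CD pipeline.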
Posted 4 days ago
10.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title: Lead Technical Architect
Location: Ahmedabad
Employment Type: Full-time
Experience Level: 10+ Years

Key Responsibilities

1. Architecture & Design
· Develop end-to-end architecture blueprints for large-scale enterprise applications.
· Define component-based and service-oriented architectures (Microservices, SOA, Event-Driven).
· Create API-first designs using REST, GraphQL, and gRPC with clear versioning strategies.
· Establish integration patterns for internal systems, third-party APIs, and middleware.
· Design cloud-native architectures leveraging AWS, Azure, or GCP services.
· Define coding guidelines, performance benchmarks, and security protocols.
· Participate in POC projects to evaluate new tools and frameworks.

2. Performance, Security & Scalability
· Implement caching strategies (Redis, Memcached, CDN integrations).
· Ensure horizontal and vertical scalability of applications.
· Apply security best practices: OAuth 2.0, JWT, SAML, encryption (TLS/SSL, AES), input validation, and secure API gateways.
· Set up application monitoring and logging using ELK, Prometheus, Grafana, or equivalent.

3. DevOps & Delivery
· Define CI/CD workflows using Jenkins, GitHub Actions, Azure DevOps, or GitLab CI.
· Collaborate with DevOps teams for container orchestration (Docker, Kubernetes).
· Integrate automated testing pipelines (unit, integration, and load testing).

Required Technical Skills

Programming & Frameworks:
· Expertise in one or more enterprise languages: Core, Node.js.
· Strong understanding of front-end technologies (Angular, React) for full-stack integration.

Architecture & Patterns:
· Microservices, Domain-Driven Design (DDD), Event-Driven Architecture (EDA).
· Message brokers and streaming: Kafka, RabbitMQ, Azure Event Hub, Azure Service Bus.

Databases & Storage:
· Relational DBs: PostgreSQL, MySQL, MS SQL Server.
· NoSQL DBs: MongoDB.
· Caching layers: Redis, Memcached.

Cloud & Infrastructure:
· Azure (App Services, Functions, API Management, Cosmos DB).

Security:
· OAuth 2.0, SAML, OpenID Connect, JWT.
· Secure coding practices, threat modelling, penetration testing familiarity.

DevOps & CI/CD:
· Azure DevOps, GitLab CI/CD.
· Docker, Kubernetes.

Testing & Quality Assurance:
· Unit testing (JUnit, NUnit, PyTest, Mocha).
· Performance/load testing (JMeter, Locust).

Monitoring & Observability:
· Azure Monitor, Application Insights, Prometheus, Grafana.

Preferred Skills & Certifications
· Microsoft Certified: Azure Solutions Architect Expert.
· Exposure to AI/ML services and IoT architectures.

KPIs for Success
· Reduced system downtime through robust architecture designs.
· Improved performance metrics and scalability readiness.
· Successful delivery of complex projects without major architectural rework.
· Increased developer productivity through better standards and tools adoption.
Posted 4 days ago
8.0 years
0 Lacs
India
On-site
We are looking for a Full Stack React.js Developer. The ideal candidate will be responsible for developing high-quality applications and for designing and implementing testable and scalable code. Apply with an updated CV at sony.pathak@aptita.com

Lead Engineer – React
Notice period: Immediate to 30 days
Experience range: 8 years
Must-have experience: React.js, Node.js

Responsibilities

Education and experience:
○ Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
○ Minimum of 8 years of professional experience in full-stack development.

Technical Requirements:
○ Proficiency in JavaScript, including ES6 and beyond, asynchronous programming, closures, and prototypal inheritance.
○ Expertise in modern front-end frameworks/libraries (React, Vue.js).
○ Strong understanding of HTML5, CSS3, and pre-processing platforms like SASS or LESS.
○ Experience with responsive and adaptive design principles.
○ Knowledge of front-end build tools like Webpack, Babel, and npm/yarn.
○ Proficiency in Node.js and frameworks like Express.js, Koa, or NestJS.
○ Experience with RESTful API design and development.
○ Experience with serverless (Lambda, Cloud Functions).
○ Experience with GraphQL.
○ Experience with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Redis).
○ Experience with caching and search frameworks (Redis, Elasticsearch).
○ Proficiency in database schema design and optimization.
○ Experience with containerization tools (Docker, Kubernetes).
○ Experience with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI).
○ Knowledge of cloud platforms (AWS, Azure, Google Cloud).
○ Proficiency in testing frameworks and libraries (Jest, Vitest, Cypress, Storybook).
○ Strong debugging skills using tools like Chrome DevTools and the Node.js debugger.
○ Expertise in using Git and platforms like GitHub, GitLab, or Bitbucket.
○ Understanding of web security best practices (OWASP).
○ Experience with authentication and authorization mechanisms (OAuth, JWT).
○ Experience with system security, scalability, and system performance.

Qualifications
Bachelor's degree or equivalent experience in Computer Science or a related field
Development experience with programming languages
SQL database or relational database skills
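To make the JWT requirement in the posting above concrete, here is a hedged sketch of how an HS256 JWT is signed and verified, using only `node:crypto`. It illustrates the mechanism only; production code should use a vetted library such as `jsonwebtoken`, and the function names here are our own.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// base64url is the JWT encoding: base64 without padding, URL-safe alphabet.
const b64url = (buf: Buffer): string => buf.toString("base64url");

// Sign: token = base64url(header) + "." + base64url(payload) + "." + HMAC-SHA256 signature.
export function signJwt(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const signature = b64url(
    createHmac("sha256", secret).update(`${header}.${body}`).digest(),
  );
  return `${header}.${body}.${signature}`;
}

// Verify: recompute the signature and compare; return the claims or null.
export function verifyJwt(token: string, secret: string): object | null {
  const [header, body, signature] = token.split(".");
  if (!header || !body || !signature) return null;
  const expected = b64url(
    createHmac("sha256", secret).update(`${header}.${body}`).digest(),
  );
  const a = Buffer.from(signature);
  const b = Buffer.from(expected);
  // Constant-time comparison to avoid timing side channels.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString("utf8"));
}
```

The same structure underlies OAuth 2.0 bearer tokens; libraries add expiry checks (`exp`), asymmetric algorithms (RS256), and key rotation on top of this core.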
Posted 4 days ago
10.0 years
0 Lacs
India
Remote
Required Skills: Lead experience, DevOps, OCI, AWS

Key Job Responsibilities:
• Lead, design, and develop build and deployment solutions for JavaScript, .NET, MuleSoft, and ERP applications using enterprise-level automation tools
• Lead, research, design, and implement strategies for continuous integration and continuous deployment (CI/CD) and release management
• Use automation to provision and maintain Amazon Web Services cloud infrastructure
• Use automation to provision and maintain Oracle Cloud Infrastructure resources
• Build pipelines to compile and deploy code to target systems
• Build pipelines to manage configurations on target systems
• Set up integration between DevOps tools like GitHub, TeamCity, Octopus Deploy, New Relic, JIRA, and ServiceNow to enable automated processes for issue and change request deployments
• Research, develop, and implement best practices/methodologies for infrastructure provisioning (including Infrastructure as Code), application scaling, and configuration management
• Engineer systems and tools to support the build, integration, and verification of complex software systems spanning multiple hardware platforms, mobile platforms, and cloud-based platforms and services
• Work with the Information Services delivery team to implement and maintain highly scalable build and release solutions, including continuous delivery, optimization, monitoring, release management, and support for all Driscoll's IS systems
• Manage Driscoll's GitHub source code repositories for internal projects and vendor-developed systems
• Contribute to the development and implementation of business continuity and disaster recovery processes

Job Requirements:
• Minimum of a Bachelor's degree in Software Engineering, Computer Science, or equivalent
• 10+ years of experience in DevOps engineering
• 5+ years of experience leading DevOps teams
• Extensive experience with the software development lifecycle, and with branching, versioning, environment, and configuration management strategies to enable continuous integration/deployment
• Familiarity with the software testing lifecycle and testing frameworks and processes is a plus
• Experience in Oracle Fusion Cloud ERP deployments
• Experience developing and maintaining build and deployment processes and scripting
• Extensive experience working with GitHub, a cloud-based source code management tool
• Extensive experience with CI tools (TeamCity, Jenkins), package deployment tools (Octopus Deploy), and configuration management tools (Terraform, Ansible)
• Extensive experience with cloud platforms such as Amazon Web Services (EC2, S3, CloudFormation; Glue, DynamoDB, and Redshift are all plusses) and Oracle Cloud Infrastructure (Oracle SaaS and PaaS offerings, governance, OCI networking); Azure experience is a plus
• Experience with monitoring tools (New Relic, Grafana)
• Experience with code quality and security tools (Snyk, SonarQube)
• Experience with JIRA for issue tracking and ServiceNow for incident and change management
• Strong programming/scripting skills (Python, PowerShell, Bash)
• Advanced English communication skills with all levels of the organization (written, verbal, digital, formal presentations)
Posted 4 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Hiring: Java Backend Developer with Cryptography Expertise and Cloud Experience

Experience: Minimum 3 years as a Java developer in the payment domain.

About the Job: We are seeking an exceptional Java SDK developer with a strong background in Java and expertise in cryptography, particularly in RSA, AES, and EC key generation and usage. The ideal candidate will have experience in developing secure and efficient cloud applications using Java.

Responsibilities:
• Design, develop, and maintain Java cloud applications.
• Design REST/SOAP client APIs to communicate with frontend devices.
• Ensure secure communication between client and server.
• Implement a security layer in the cloud to prevent DDoS and other attacks.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Integrate different features in system applications, ensuring seamless interactions.
• Write well-designed, efficient, and testable code, adhering to best practices in cryptography.
• Conduct unit testing and support system testing to ensure the security and reliability of the SDK.
• Troubleshoot and debug Java applications, identifying and resolving issues related to cryptography.
• Participate in code reviews to maintain high code quality standards and ensure the secure implementation of cryptographic algorithms.
• Develop and maintain technical documentation for the SDK.
• DevOps experience is a plus.

Required Skills:
• Strong expertise in the Java programming language.
• In-depth knowledge of cryptography, including RSA, AES, and EC key generation and usage.
• Proficiency in developing secure and efficient Java applications and designing backend systems.
• Strong understanding of Object-Oriented Programming (OOP) concepts.
• Experience with unit testing and integration testing frameworks.
• Good understanding of GitHub and version control systems.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
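The posting above is Java-focused, but the AES mechanics it asks about are language-agnostic. As a hedged illustration only, here is an authenticated AES-256-GCM encrypt/decrypt round trip sketched in TypeScript with `node:crypto` (the Java equivalent uses `Cipher.getInstance("AES/GCM/NoPadding")`); all function names are our own, not from the employer.

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from "node:crypto";

// Encrypted message: IV, ciphertext, and the GCM authentication tag.
interface SealedBox {
  iv: Buffer;
  ciphertext: Buffer;
  tag: Buffer;
}

// AES-256-GCM encryption: a fresh random 96-bit IV per message, as
// recommended for GCM; the tag authenticates the ciphertext.
export function encrypt(plaintext: string, key: Buffer): SealedBox {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

// Decryption verifies the tag; a tampered message throws instead of
// silently returning garbage.
export function decrypt(box: SealedBox, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag);
  return Buffer.concat([decipher.update(box.ciphertext), decipher.final()]).toString("utf8");
}
```

In a payment SDK, a symmetric key like this would typically itself be exchanged via RSA or EC key agreement, which is why the posting asks for all three.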
Posted 4 days ago
0.0 years
0 - 0 Lacs
Malappuram, Kerala
On-site
Flutter Developer Intern

Company: Cookee Apps LLP
Location: On-site (Kozhikode, Kerala, India)
Job Type: Internship (Full-time, 6 Months)
Schedule: Day shift

About Us
Cookee Apps LLP is a fast-growing software company that builds innovative web and mobile solutions. We're passionate about mentoring fresh talent through real-world, hands-on training and support.

Position Overview
We are seeking a proactive and enthusiastic Flutter Developer Intern for a 6-month, full-time internship. You will gain practical experience building cross-platform mobile applications using Flutter and Dart, working alongside our front-end, back-end, and UI/UX teams.

Key Responsibilities
• Contribute to the development of mobile apps using Flutter and Dart.
• Collaborate with design and backend teams to implement responsive UI/UX and integrate RESTful APIs.
• Write clean, maintainable, and efficient code.
• Participate in code reviews, troubleshooting, and bug-fixing to improve app stability and performance.
• Assist in writing unit tests and contribute to documentation.
• Stay updated with emerging mobile technologies and Flutter best practices.

Required Skills
• Strong fundamentals in Dart and Flutter development.
• Basic understanding of mobile development concepts (UI frameworks, state management, navigation).
• Familiarity with RESTful API integration and JSON parsing.
• Proficiency with Git version control.
• Solid problem-solving abilities and attention to detail.
• Strong communication skills and collaborative mindset.

Preferred Qualifications
• Pursuing or completed a degree/certification in Computer Science, Software Engineering, or a related field.
• Portfolio or GitHub showcasing Flutter/Dart projects (academic, personal, or hackathon).
• Experience using state management solutions (e.g., Provider, BLoC, GetX).
• Exposure to unit testing in Flutter, CI/CD pipelines, or Firebase integration.

What We Offer
• Internship Certificate upon successful completion.
• Letter of Recommendation for outstanding performers.
• Real-time exposure to industry-level codebases and agile development processes.
• Mentorship from senior developers and the possibility of a full-time role post-internship.

Duration & Schedule
• 6 months full-time commitment
• Day shift, on-site at Kozhikode, Kerala

How to Apply
Submit your resume, GitHub portfolio, and a brief statement of interest to career@cookee.io

Job Type: Internship
Pay: ₹8,086.00 - ₹55,443.88 per month
Work Location: In person
Posted 4 days ago
5.0 years
15 - 35 Lacs
Kerala, India
On-site
🚀 We’re Hiring: Senior DevOps Roles in Kerala 🚀
📍 Locations: Cochin / Trivandrum, Kerala
🕒 Experience: Analyst – 5+ years | Architect – 10+ years
📅 Onboarding: Analyst – Immediate | Architect – September 1, 2025

We’re building our Azure DevOps powerhouse and are looking for two key roles to join our team. Whether you’re a hands-on Analyst or an Architect-level strategist, this is your chance to shape modern CI/CD practices, drive automation, and enhance security at scale.

1️⃣ DevOps Analyst – Key Skills:
• GitHub Actions CI/CD orchestration
• Azure Container Apps, Key Vault, Storage, Networking
• Snyk, SonarQube, IaC (Bicep/ARM/Terraform)
• Test automation & DevSecOps practices
• Jira integration, Cloudflare CDN, SAP Hybris CI/CD
• Docker deployments on Azure
• L3 troubleshooting & CI/CD optimization

2️⃣ DevOps Architect – Key Skills:
• Enterprise-level CI/CD architecture with GitHub Actions
• Azure infrastructure design for microservices & container workloads
• Advanced security scanning (Snyk, SonarQube) & compliance
• IaC (Bicep/ARM/Terraform), Cloudflare CDN design
• SAP Hybris CI/CD automation
• Driving an ownership-driven DevOps culture
• Mentoring teams & leading cross-functional collaboration

💡 Why Join Us?
• Work on cutting-edge Azure & DevSecOps solutions
• Collaborate with talented engineering, QA, and security teams
• Build & optimize pipelines for high availability, performance, and security

📩 Apply Now: nada@talentbasket.in

Skills: devops, azure, ci
Posted 4 days ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Senior Software Developer – Backend (Node.js)
Experience: 8+ years
Location: Chennai
Notice Period: Immediate to 30 days
Skills: Node.js, TypeScript, Express, RESTful API design, and asynchronous patterns

Responsibilities:
• Build services in Node.js / TypeScript using Express
• Translate product requirements into scalable, fault-tolerant designs
• Lead technical design for new microservices and core APIs
• Write clean, testable code with unit and integration tests (Jest, Playwright)
• Model relational data in MySQL and PostgreSQL and optimize queries/indexes
• Implement caching, sharding, or read replicas as data volumes grow
• Containerize services with Docker and work with GitLab CI or GitHub Actions within established CI/CD pipelines
• Perform thoughtful code reviews and drive adoption of best practices

Must-Have Qualifications:
• Fluency in English, both written and spoken, for daily collaboration with distributed teams
• 8+ years of professional software engineering experience, with 3+ years focused on Node.js back-end development
• Deep knowledge of TypeScript, Express, RESTful API design, and asynchronous patterns (Promises, async/await, streams)
• Strong SQL skills and hands-on experience tuning MySQL or PostgreSQL for high concurrency
• Production experience with Docker (build, compose, multi-stage images) and CI/CD pipelines (GitLab CI, GitHub Actions, or similar)
• Proficiency with Git workflows and code review culture
• Experience implementing caching strategies (e.g., Redis)
• Passion for automated testing, clean architecture, and scalable design
• Understanding of OAuth 2.0, JWT, and secure coding practices

Nice-to-Have:
• Experience with TypeORM, NestJS, or Fastify
• Experience exposing or consuming GraphQL
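As a concrete illustration of the "asynchronous patterns" this posting asks for, here is a small hedged TypeScript sketch: a typed retry helper with exponential backoff, a pattern commonly used in fault-tolerant Node.js services when calling databases or downstream APIs. The function name and defaults are illustrative, not from the employer.

```typescript
// Retry an async operation up to `attempts` times, doubling the delay
// after each failure (100ms, 200ms, 400ms, ...). Re-throws the last
// error if every attempt fails.
export async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Exponential backoff before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

In an Express service, a handler might wrap a flaky upstream call as `await withRetry(() => fetchOrder(id))`, keeping the retry policy out of the business logic.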
Posted 4 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary
We are looking for a skilled and experienced Testing Automation Engineer to join our QA team. The ideal candidate will be responsible for designing, developing, and maintaining automated test scripts to ensure the functionality, performance, and reliability of our software applications. You will collaborate with cross-functional teams to support a robust and efficient testing process.

Key Responsibilities
• Design & develop automation frameworks: build and maintain robust, reusable automation frameworks using languages like Java, Python, or C#.
• Automate test cases: develop automated test scripts for functional, regression, and integration testing scenarios.
• Maintain test scripts: continuously update and refine existing automation scripts to ensure high test coverage and relevancy.
• Collaborate with QA & dev teams: work closely with QA analysts and developers to define testing requirements and formulate automation strategies.
• CI/CD integration: integrate automated tests into CI/CD pipelines to support fast, reliable delivery cycles.
• Debug & troubleshoot: identify and fix issues in automation scripts, analyze test failures, and recommend improvements.
• Reporting & documentation: generate detailed test reports, document test execution results, and maintain automation documentation.

Skills & Qualifications
• Proficiency with automation tools such as Selenium, Appium, Cypress, QTP, RFT, Robot Framework, Worksoft, or Parasoft SOAtest.
• Strong programming/scripting knowledge in Java, Python, or C#.
• Experience working with CI/CD tools such as Jenkins, Azure DevOps, or GitHub Actions.
• Sound understanding of the software testing life cycle (STLC), the defect lifecycle, and Agile methodologies.
• Ability to manage test data, create reusable components, and design scalable automation solutions.
• Excellent problem-solving skills and attention to detail.
• Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications (Nice to Have)
• Experience with API testing using tools like Postman or REST Assured.
• Familiarity with cloud platforms (AWS, Azure, or GCP).
• Exposure to service virtualization, mocking tools, or test data management tools.
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: GCP Cloud Platform Engineer
Location: Pune
Duration: Contract to hire

Job Description:
GCP core services: IAM, VPC, GCE (Google Compute Engine), GCS (Google Cloud Storage), Cloud SQL, MySQL, CI/CD tools (Code Build / GitHub Actions)
Other tools: GitHub, Terraform, shell scripting, Ansible
Posted 4 days ago
50.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: Our client is a French multinational information technology (IT) services and consulting company headquartered in Paris, France. Founded in 1967, it has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. It provides a variety of services, including consulting, technology, professional, and outsourcing services.

Job Description:
Job Title: Camunda BPM
Location: Pune
Experience: 6+ yrs
Employment Type: Contract to hire
Work Mode: Hybrid
Notice Period: Immediate joiners

Must have:
1. Camunda 7
2. Java
3. Front-end experience (Angular)

Relevant Experience:
• Minimum 5 years of experience in Business Process Management
• Minimum 4 years of experience in Camunda 7
• Training on Camunda 8 if possible

Job Summary:
• Camunda BPM Developer who can work independently.
• Hands-on development, coding, and debugging is a must.
• Develops high-quality deliverables across all Camunda projects and provides guidance to the team on project assignments.
• Works with very complex workflows, asynchronous tasks, user tasks, event listeners, and Business Central deployments and APIs.
• Translates complex business requirements into technical specifications using Camunda.
• Collaborates with multiple teams of developers to implement project specifications, providing workflow support and technical guidance to less experienced team members.
• Very good analytical and problem-solving ability with excellent verbal and written communication skills.
• Aware of Agile and SAFe ways of working.

Required skills:
• Camunda 7
• Java
• Front-end experience (Angular)
• Node.js (optional)
• REST
• Microservices (optional)
• Docker, GitHub Actions, cloud configurations
Posted 4 days ago