
14352 Orchestration Jobs - Page 15

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

0.0 - 3.0 years

10 - 12 Lacs

Pune, Maharashtra

On-site

Job description:
- Experience with cloud platforms such as AWS, Azure, or Google Cloud
- Familiarity with containerization and orchestration tools like Docker and Kubernetes
- Knowledge of database management systems and data modeling
- Experience with performance optimization and scalability considerations
- Certifications in relevant technologies or methodologies are desirable

Skills: Ionic Framework, iOS Swift, Java, Android Native
Location: Pune (on-site)
Job Type: Full-time
Pay: ₹1,000,000 - ₹1,200,000 per year
Benefits: Health insurance, Provident Fund
Application Question(s): Do you have experience with the Ionic Framework?
Education: Bachelor's (preferred)
Experience: 3 years total work (required)
Work Location: In person, Pune, Maharashtra (required)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Charles Technologies is a fast-growing startup based in Chennai, building cutting-edge mobile applications that redefine user experiences. We are seeking a Senior Backend Developer to lead backend initiatives, mentor junior developers, and architect scalable systems that power our products. If you're a seasoned backend engineer with deep expertise in Golang or Node.js, and you're passionate about building data-intensive, robust, high-performance systems, we'd love to connect!

Key Responsibilities:
- Architect, design, and implement scalable backend systems using Golang or Node.js
- Lead technical discussions and decision-making across backend projects
- Collaborate with cross-functional teams including frontend developers, product managers, and DevOps engineers
- Ensure code quality through rigorous reviews, testing, and documentation
- Optimize backend services for performance, reliability, and scalability
- Design and maintain RESTful and GraphQL APIs
- Manage and scale databases (SQL and NoSQL) effectively
- Drive adoption of best practices in security, performance, and maintainability
- Mentor junior developers and contribute to team growth and knowledge sharing

Required Skills & Qualifications:
- 5+ years of professional experience in backend development with Golang or Node.js
- Strong understanding of database technologies such as MongoDB, PostgreSQL, or MySQL
- Experience with containerization tools like Docker and orchestration platforms like Kubernetes
- Proficiency in designing and implementing RESTful APIs and microservices
- Solid grasp of software architecture principles and design patterns
- Familiarity with Git, CI/CD pipelines, and DevOps practices
- Strong problem-solving skills and ability to work independently

Preferred Skills:
- Experience with cloud platforms (Azure, AWS, GCP)
- Exposure to message brokers (Kafka, RabbitMQ)
- Familiarity with GraphQL and API gateway solutions
- Understanding of security best practices in backend development
- Experience with testing frameworks (Jest, Mocha, etc.)
- Interest or experience in frontend development (React)

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We are certified and recognized as one of the best companies to work for in India. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & machine learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products.

About the Role: We are seeking an experienced Infrastructure Site Reliability Engineer (SRE) to join our team. This role is critical for ensuring the reliability, scalability, and performance of our infrastructure, particularly in managing and optimizing high-throughput data systems. You will work closely with engineering teams to design, implement, and maintain robust infrastructure solutions that meet our growing needs. As an Infrastructure SRE, you will be at the forefront of managing and optimizing our Kafka and OpenSearch clusters, AWS services, and multi-cloud environments. Your expertise will be key in ensuring the smooth operation of our infrastructure, enabling us to deliver high-performance and reliable services. This is an exciting opportunity to contribute to a dynamic team that is shaping the future of data observability and orchestration pipelines.

Responsibilities:
- Kafka Management: Set up, manage, and scale Kafka clusters, including implementing and optimizing Kafka Streams and Connect for seamless data integration. Fine-tune Kafka brokers and optimize producer/consumer configurations to ensure peak performance (a producer-tuning sketch follows this posting).
- OpenSearch Expertise: Configure and manage OpenSearch clusters, optimizing indexing strategies and query performance. Ensure high availability and fault tolerance through effective data replication and sharding. Set up monitoring and alerting systems to track cluster health.
- AWS Services Proficiency: Manage AWS RDS instances, including provisioning, configuration, and scaling. Optimize database performance and ensure robust backup and recovery strategies. Deploy, manage, and scale Kubernetes clusters on AWS EKS, configuring networking and security policies, and integrating EKS with CI/CD pipelines for automated deployment.
- Multi-Cloud Environment Management: Design and manage infrastructure across multiple cloud providers, ensuring seamless cloud networking and security. Implement disaster recovery strategies and optimize costs in a multi-cloud setup.
- Linux Administration: Optimize Linux server performance, manage system resources, and automate processes using shell scripting. Apply best practices for security hardening and troubleshoot Linux-related issues effectively.
- CI/CD Automation: Design and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI, and ArgoCD. Automate deployment processes, integrate with version control systems, and implement advanced deployment strategies like blue-green deployments, canary releases, and rolling updates. Ensure security and compliance within CI/CD processes.

Qualifications:
- Bachelor's, Master's, or Doctorate in Computer Science or a related field
- Deep knowledge of Kafka, with hands-on experience in cluster setup, management, and performance tuning
- Expertise in OpenSearch cluster management, indexing, query optimization, and monitoring
- Proficiency with AWS services, particularly RDS and EKS, including experience in database management, performance tuning, and Kubernetes deployment
- Experience in managing multi-cloud environments, with a strong understanding of cloud networking, security, and cost optimization strategies
- Strong background in Linux administration, including system performance tuning, shell scripting, and security hardening
- Proficiency with CI/CD automation tools and best practices, with a focus on secure and compliant pipeline management
- Strong analytical and problem-solving skills, essential for troubleshooting complex technical challenges

Benefits - Our Culture:
- An autonomous and empowered work culture encouraging individuals to take ownership and grow quickly
- Flat hierarchy with fast decision-making and a startup-oriented "get things done" culture
- A strong, fun, and positive environment with regular celebrations of our success; we pride ourselves on creating an inclusive, diverse, and authentic environment

At Velotio, we embrace diversity. Inclusion is a priority for us, and we are eager to foster an environment where everyone feels valued. We welcome applications regardless of ethnicity or cultural background, age, gender, nationality, religion, disability or sexual orientation.
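The producer tuning mentioned above usually comes down to a handful of librdkafka settings. A minimal sketch using Python's confluent-kafka client; the broker address, topic name, and batch values are illustrative assumptions, not cluster-specific recommendations:

```python
from confluent_kafka import Producer

# Illustrative throughput-oriented settings; tune against real broker metrics.
conf = {
    "bootstrap.servers": "broker1:9092",  # assumed broker address
    "acks": "all",                        # favor durability over latency
    "enable.idempotence": True,           # safe retries without duplicates
    "linger.ms": 50,                      # wait briefly to form larger batches
    "batch.size": 131072,                 # 128 KiB batches
    "compression.type": "lz4",
}

producer = Producer(conf)

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")

# "events" is a hypothetical topic name.
producer.produce("events", value=b'{"id": 1}', on_delivery=on_delivery)
producer.flush(10)
```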

Posted 2 days ago

Apply

3.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Company Description: Cloudologic is a prominent cloud consulting and IT service provider based in Singapore with roots in India. The company specializes in cloud operations, cyber security, and managed services. With a decade of experience, Cloudologic is known for delivering high-quality services globally and is recognized as a trusted partner in the tech industry.

Role Description: This is a full-time remote role for an Ansible Engineer located in New Delhi. The Ansible Engineer will be responsible for day-to-day tasks related to computer science, back-end web development, software development, programming, and object-oriented programming (OOP). We are looking for a highly skilled Ansible Engineer with strong Linux expertise to join our infrastructure and automation team. The ideal candidate will be responsible for automating server provisioning, configuration management, and deployment tasks using Ansible in complex Linux environments. You will help drive infrastructure automation, scalability, and operational efficiency across our platforms.

Key Responsibilities:
- Develop, manage, and maintain Ansible playbooks and roles for automating Linux system configurations, deployments, and patching
- Perform Linux system administration tasks including setup, tuning, troubleshooting, and performance monitoring
- Automate repetitive tasks and enforce configuration consistency across environments
- Collaborate with DevOps, Security, and Development teams to streamline infrastructure workflows
- Design and implement scalable, secure, and fault-tolerant systems in Linux-based environments
- Integrate Ansible automation with CI/CD tools such as Jenkins, GitLab CI, or Azure DevOps
- Use Ansible Tower or AWX for orchestration, role-based access control, and reporting
- Maintain detailed documentation for system configurations and automation standards
- Participate in incident response and root cause analysis related to configuration and system issues

Requirements:
- 3+ years of hands-on experience with Linux system administration (Red Hat, CentOS, Ubuntu, etc.)
- 2+ years of experience working with Ansible (playbooks, roles, modules, templates)
- Proficient in writing shell scripts (Bash) and basic scripting in Python
- Deep understanding of system services (systemd, networking, file systems, firewalls)
- Familiarity with Git and version control workflows
- Experience with virtualization and cloud platforms (AWS, Azure, or GCP) is a plus
- Knowledge of infrastructure security and hardening Linux environments
- Strong troubleshooting, diagnostic, and problem-solving skills
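One common way to drive the playbook work described above from Python is the ansible-runner library. A minimal sketch assuming a hypothetical site.yml playbook and hosts.ini inventory inside a local runner directory:

```python
import ansible_runner

# Run a playbook programmatically; all paths below are hypothetical.
result = ansible_runner.run(
    private_data_dir="/tmp/runner",  # working dir holding project/, inventory/
    playbook="site.yml",             # e.g. applies baseline Linux configuration
    inventory="hosts.ini",
)

print(result.status, result.rc)      # e.g. "successful", 0

# Walk per-task events, useful for logging and audit trails.
for event in result.events:
    if event.get("event") == "runner_on_failed":
        print(event["event_data"].get("task"))
```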

Posted 2 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview
Job Title: DevOps Engineer
Location: Pune, India

Role Description: DevOps Engineer with knowledge of CI/CD pipelines and infrastructure provisioning.

What We'll Offer You: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender-neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your Key Responsibilities:
- Designing, building, and maintaining CI/CD pipelines for automated application deployments
- Infrastructure provisioning
- Manage and optimize cloud resources (a small inventory sketch follows this posting)
- Collaborate with development teams to streamline the software development lifecycle
- Monitor system performance and troubleshoot issues in development, testing, and production environments
- Software upgrades and maintenance
- Migration of out-of-support application software

Your Skills and Experience:
- Experience: 5 - 10 years
- Scripting (Python, Bash)
- Version control (Git)
- CI/CD tools (Jenkins)
- Cloud platforms (AWS, Azure, or Google Cloud)
- Containerization and orchestration: experience with Docker and Kubernetes
- Monitoring and logging: familiarity with monitoring and logging tools
- Networking: understanding of networking concepts (e.g., TCP/IP, DNS)
- Operating systems: knowledge of Linux and Windows operating systems
- Databases: familiarity with databases and SQL
- Know-how with cloud-based infrastructure

How We'll Support You:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams: Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
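For the "manage and optimize cloud resources" responsibility, an inventory script is a common starting point. A hedged sketch with boto3; the region and filter are illustrative assumptions:

```python
import boto3

# List running EC2 instances as a first step in a cost/rightsizing review.
ec2 = boto3.client("ec2", region_name="ap-south-1")  # assumed region

paginator = ec2.get_paginator("describe_instances")
filters = [{"Name": "instance-state-name", "Values": ["running"]}]

for page in paginator.paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])
```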

Posted 2 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Overview
Job Title: DevOps Engineer
Location: Pune, India

Role Description: DevOps Engineer with knowledge of CI/CD pipelines and infrastructure provisioning.

What We'll Offer You: As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leaves
- 100% reimbursement under childcare assistance benefit (gender-neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your Key Responsibilities:
- Designing, building, and maintaining CI/CD pipelines for automated application deployments
- Infrastructure provisioning
- Manage and optimize cloud resources
- Collaborate with development teams to streamline the software development lifecycle
- Monitor system performance and troubleshoot issues in development, testing, and production environments
- Software upgrades and maintenance
- Migration of out-of-support application software

Your Skills and Experience:
- Experience: 2 - 5 years
- Scripting (Python, Bash)
- Version control (Git)
- CI/CD tools (Jenkins)
- Cloud platforms (AWS, Azure, or Google Cloud)
- Containerization and orchestration: experience with Docker and Kubernetes
- Monitoring and logging: familiarity with monitoring and logging tools
- Networking: understanding of networking concepts (e.g., TCP/IP, DNS)
- Operating systems: knowledge of Linux and Windows operating systems
- Databases: familiarity with databases and SQL
- Know-how with cloud-based infrastructure

How We'll Support You:
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About Us and Our Teams: Please visit our company website for further information: https://www.db.com/company/company.htm

We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 2 days ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Due to growth and customer demand, CommScope is looking to add a Sr. Technical Marketing Manager to our RUCKUS Networks Technical Marketing team in Bangalore. Expert-level technical knowledge of wired (switching/routing) and Wi-Fi enterprise networks is a must.

As a hands-on Senior Technical Marketing Manager, you will lead or participate in:
- Strategic Planning: Develop comprehensive technical marketing strategies and plans to align with our strategic objectives.
- Technical Content Development: Create compelling technical content including, but not limited to, reference architectures, best practice guides, feature notes, technical briefs, presentations, training decks and demo videos that effectively showcase the features, benefits, and competitive advantages of our products and solutions.
- Competitive Analysis: Conduct feature-, product-, and solution-level in-depth competitive analysis to identify and demonstrate our differentiators from our competitors.
- Pre-Sales/SE Enablement: Collaborate with the SE team to develop SE enablement tools and resources such as technical training materials, competitive battle cards, solution guides, and customer-facing technical presentations to support the sales engineering process and drive revenue growth.
- SME: Act as a subject matter expert on campus networking technologies, providing guidance and support to SE teams and customers during pre-sales and post-sales activities.
- Technical Evangelism: Represent the company as a subject matter expert at industry events, conferences, webinars, and customer meetings. Deliver technical presentations and demonstrations to showcase the capabilities and value proposition of our networking solutions. Author technical blogs, participate in technical podcasts, and more.
- Product Launch Support: Lead the technical marketing aspects of product launches, including developing launch plans, creating launch collateral, coordinating launch activities, and ensuring successful execution.
- Customer Engagement: Engage with customers to understand their networking challenges, gather feedback on our products, and communicate customer success stories and case studies.
- Cross-Functional Collaboration: Work closely with product management, engineering, sales, and marketing teams to align technical marketing initiatives with overall business objectives and ensure consistency in messaging and positioning.
- Support or conduct proof-of-concept (POC) tests for RUCKUS products and write reports.
- Support the field teams and partners with formal responses to documents such as RFI/RFQ/RFP.

Experience & Expertise:
- Minimum 12+ years in technical marketing, with a proven track record driving strategy and enablement across networking solutions
- Strong foundation in wireless/RF, switching/routing, and secure access protocols, including 802.11a/b/g/n/ac/ax/be, 802.1X, VLANs, STP, OSPF, RIP, BGP, multicast, and MPLS

Technical Mastery:
- Campus networking technologies; Wi-Fi and RF engineering; Layer 2/3 switching
- NGFW/UTM firewalls & SD-WAN gateway solutions
- Cloud networking & infrastructure; network analytics & performance optimization
- Campus network security; network access control (NAC)
- End-to-end network design; automation & orchestration tools; IoT integration

AI & Automation Proficiency:
- Working knowledge of AI/ML and AIOps frameworks
- Hands-on experience with scripting languages: Python, Ansible, Bash
- Familiarity with network diagnostic tools such as Wireshark, Chariot, and similar
- Production-level experience using AI and large language model tools

Content Development & Enablement:
- Proven ability to develop high-quality technical documentation, presentations, and training guides
- Comfortable creating video content for product education and enablement
- Strong grasp of the sales cycle and ability to translate technical value into business impact
- Skilled in configuring complex WAN, LAN, and WLAN environments
- Adept at arming sales engineers and partners with competitive intelligence

Leadership & Communication:
- Exceptional communication and listening skills
- Confident in presenting to CXO-level stakeholders and engaging in customer-facing events and seminars
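On the diagnostic-tooling side (Wireshark and similar), the same packet captures can be scripted. A minimal sketch using Python's scapy; it captures on the default interface and requires root privileges:

```python
from scapy.all import sniff

# Print a one-line summary of each captured packet.
def show(packet):
    print(packet.summary())

# Capture 20 TCP packets, akin to a quick Wireshark spot check.
sniff(filter="tcp", prn=show, count=20)
```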

Posted 2 days ago

Apply

6.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

JOB DESCRIPTION: DATA ENGINEER (Databricks & AWS)

Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.

Locations: Jaipur, Pune, Hyderabad, Bangalore, Noida

Responsibilities:
• Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager
• Build and maintain ETL/ELT pipelines for both batch and streaming data.
• Work with structured and unstructured datasets at scale.
• Apply Data Modeling principles and advanced SQL techniques.
• Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats.
• Collaborate with product teams to understand requirements and deliver optimized data solutions.
• Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
• Work independently with minimal supervision and strong ownership of deliverables.

Must Have:
• 6+ years of experience in Data Engineering on AWS Cloud.
• Hands-on expertise in:
  o Apache Spark (PySpark, SparkSQL)
  o Delta Lake / Iceberg formats
  o Databricks on AWS
  o AWS Glue, Amazon Athena, Amazon Redshift
• Strong SQL skills and performance tuning experience on large datasets.
• Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
• Experience with environment setup, cluster management, user roles, and authentication in Databricks.
• Certified as a Databricks Certified Data Engineer – Professional (mandatory).

Good To Have:
• Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
• Experience with Databricks ML or Spark 3.x upgrades.
• Familiarity with Airflow, Step Functions, or other orchestration tools.
• Experience integrating Databricks with AWS services in a secured, production-ready environment.
• Experience with monitoring and cost optimization in AWS.

Key Skills:
• Languages: Python, SQL, PySpark
• Big Data Tools: Apache Spark, Delta Lake, Iceberg
• Databricks on AWS
• AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
• Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
• Other: Data Modeling, ETL Methodology, Performance Optimization
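A minimal batch-pipeline sketch matching the PySpark + Delta Lake stack above; the bucket paths and column names are hypothetical, and a Delta-enabled cluster (e.g. Databricks) is assumed:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw JSON, apply light cleansing, and write a partitioned Delta table.
raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical path

clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .save("s3://example-bucket/curated/orders/"))        # hypothetical path
```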

Posted 2 days ago

Apply

0 years

0 Lacs

India

Remote

Company Description: ThreatXIntel is a startup cyber security company specializing in protecting businesses and organizations from cyber threats. Our tailored services include cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. We prioritize delivering affordable solutions that cater to the specific needs of our clients, regardless of their size. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they can be exploited.

Role Description: We are seeking an experienced GCP Data Engineer for a contract engagement focused on building, optimizing, and maintaining high-scale data processing pipelines using Google Cloud Platform services. You'll work on designing robust ETL/ELT solutions, transforming large data sets, and enabling analytics for critical business functions. This role is ideal for a hands-on engineer with strong expertise in BigQuery, Cloud Composer (Airflow), Python, and Cloud SQL/PostgreSQL, with experience in distributed data environments and orchestration tools.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT workflows using GCP Composer (Apache Airflow)
- Work with BigQuery, Cloud SQL, and PostgreSQL to manage and optimize data storage and retrieval
- Build automation scripts and data transformations using Python (PySpark knowledge is a strong plus)
- Optimize queries for large-scale, distributed data processing systems
- Collaborate with cross-functional teams to translate business and analytics requirements into scalable technical solutions
- Support data ingestion from multiple structured and semi-structured sources including Hive, MySQL, and NoSQL databases
- Apply HDFS and distributed file system experience where necessary
- Ensure data quality, reliability, and consistency across platforms
- Provide ongoing maintenance and support for deployed pipelines and services

Required Qualifications:
- Strong hands-on experience with GCP services, particularly BigQuery, Cloud Composer (Apache Airflow), and Cloud SQL/PostgreSQL
- Proficiency in Python for scripting and data pipeline development
- Experience in designing and optimizing high-volume data processing workflows
- Good understanding of distributed systems, HDFS, and parallel processing frameworks
- Strong analytical and problem-solving skills
- Ability to work independently and collaborate across remote teams
- Excellent communication skills for technical and non-technical audiences

Preferred Skills:
- Knowledge of PySpark for big data processing
- Familiarity with Hive, MySQL, and NoSQL databases
- Experience with Java in a data engineering context
- Exposure to data governance, access control, and cost optimization on GCP
- Prior experience in a contract or freelance capacity with enterprise clients
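A minimal Cloud Composer-style DAG for the BigQuery workflows described above; the project, dataset, and schedule are assumptions, and Airflow 2.x is assumed:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from google.cloud import bigquery

def load_daily_summary():
    # Aggregate events by day; the table name is hypothetical.
    client = bigquery.Client()
    sql = """
        SELECT DATE(event_ts) AS day, COUNT(*) AS events
        FROM `example-project.analytics.events`
        GROUP BY day
    """
    client.query(sql).result()  # block until the query job finishes

with DAG(
    dag_id="daily_event_summary",
    start_date=datetime(2025, 1, 1),
    schedule_interval="0 2 * * *",  # 02:00 daily, an assumed schedule
    catchup=False,
) as dag:
    PythonOperator(task_id="load_summary", python_callable=load_daily_summary)
```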

Posted 2 days ago

Apply

8.0 years

0 Lacs

India

On-site

About Company: Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernisation and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.

Job Title: ServiceNow ITSM/ITOM Lead Consultant
Location: Pan India
Experience: 8+ yrs
Job Type: Contract to hire
Notice Period: Immediate joiner

Mandatory Skills:
- Hands-on experience as a ServiceNow ITSM/ITOM Lead Consultant
- 8+ years of experience with ServiceNow, with at least 4 years in an architect role
- Strong hands-on expertise in ITSM and ITOM modules, including CMDB design and maintenance
- Experience with scripting languages (e.g., JavaScript, AngularJS) and development on the ServiceNow platform
- Proficiency in MID Server configuration, ServiceNow Discovery, and Service Mapping
- Experience with Event Management, Orchestration, and Cloud Management integrations

Responsibilities:
- Writing clean, high-quality, high-performance, maintainable code
- Develop and support software including applications, database integration, interfaces, and new functionality enhancements
- Coordinate cross-functionally to ensure the project meets business objectives and compliance standards
- Support test and deployment of new products and features
- Participate in code reviews

Qualifications: Bachelor's degree in Computer Science (or related field)

Posted 2 days ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Summary: We are seeking a highly experienced Lead DevOps Engineer to drive the strategy, design, and implementation of our DevOps infrastructure across cloud and on-premises environments. This role requires strong leadership and hands-on expertise in AWS, Azure DevOps, and Google Cloud Platform (GCP), along with deep experience in automation, CI/CD, container orchestration, and system scalability. As a technical leader, you will mentor DevOps engineers, collaborate with cross-functional teams, and establish best practices to ensure reliable, secure, and scalable infrastructure that supports our product lifecycle.

Key Responsibilities:
- Oversee the design, implementation, and maintenance of scalable and secure infrastructure on cloud and on-premises environments, cost-effectively
- Implement and manage infrastructure as code (IaC) using tools like Terraform or CloudFormation
- Manage and optimize CI/CD pipelines to accelerate development cycles and ensure seamless deployments
- Implement robust monitoring solutions to proactively identify and resolve issues
- Lead incident response efforts to minimize downtime and impact on clients
- Develop and implement automation strategies to streamline deployment, monitoring, and maintenance processes
- Mentor and guide junior/mid-level DevOps engineers, fostering a culture of learning and accountability
- Collaborate with software developers, quality assurance engineers and IT professionals to guarantee smooth deployment, automation and management of software infrastructure
- Ensure high standards for security, compliance, and data protection across the infrastructure
- Stay up to date with industry trends and emerging technologies, assessing their potential impact and recommending adoption where appropriate
- Maintain comprehensive documentation of systems, processes, and procedures to support knowledge sharing and team efficiency

Required Skills and Qualifications:
- 6+ years of hands-on experience in DevOps, infrastructure, or related roles
- Strong knowledge of cloud platforms including Azure, AWS and GCP
- Proven experience in containerization using Docker and Kubernetes
- Advanced knowledge of Linux systems and networking
- Strong experience with CI/CD tools like Jenkins, GitHub Actions, Bitbucket Pipelines, TeamCity
- Solid experience in designing, implementing, and maintaining CI/CD pipelines for automated build, test, and deployment processes
- Deep understanding of automation, scripting, and Infrastructure as Code (IaC) with Terraform and Ansible
- Strong problem-solving and troubleshooting skills, with the ability to identify root causes and implement effective solutions
- Excellent leadership, team building, and communication skills
- Bachelor's degree in computer science, IT, engineering, or equivalent practical experience

Preferred Skills and Qualifications:
- Relevant certifications (e.g., AWS Certified DevOps Engineer - Professional, GCP DevOps Engineer, or Azure Solutions Architect)
- Experience working in fast-paced product environments
- Knowledge of security best practices and compliance standards

Key Competencies:
- Leadership and mentoring capabilities in technical teams
- Strong strategic thinking and decision-making skills
- Ability to manage multiple priorities in a deadline-driven environment
- Passion for innovation, automation, and continuous improvement
- Clear, proactive communication and collaboration across teams

Why Join Us: At Admaren, we are transforming the maritime domain with state-of-the-art technology. As a Lead DevOps Engineer, you will be at the helm of infrastructure innovation, driving mission-critical systems that support global operations. You'll have the autonomy to implement cutting-edge practices, influence engineering culture, and grow with a team committed to excellence. Join us to lead from the front and shape the future of maritime software systems.

Posted 2 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Engineer
Experience: 7+ years
Location: Gurugram (On-Site)
Job Type: Full-Time

Skills: Data warehousing, architectural patterns, modern data engineering tools and frameworks, AWS, SQL, file formats

Job Description:
- A seasoned Data Engineer with a minimum of 7+ years of experience.
- Deep experience in designing and building robust, scalable data pipelines (both batch and real-time) using modern data engineering tools and frameworks.
- Proficiency in AWS data services (S3, Glue, Athena, EMR, Kinesis, etc.).
- Strong grip on SQL queries; various file formats like Apache Parquet, Delta Lake, Apache Iceberg or Hudi; and CDC patterns.
- Experience in stream processing frameworks like Apache Flink or Kafka Streams, or other distributed data processing frameworks like PySpark (see the streaming sketch after this list).
- Expertise in workflow orchestration using Apache Airflow.
- Strong analytical and problem-solving skills, with the ability to work independently in a fast-paced environment.
- In-depth knowledge of database systems (both relational and NoSQL) and experience with data warehousing concepts.
- Hands-on experience with data integration tools and a strong familiarity with cloud-based data warehousing and processing is highly desirable.
- Excellent communication and interpersonal skills, facilitating effective collaboration with both technical and non-technical stakeholders.
- A strong desire to stay current with emerging technologies and industry best practices in the data landscape.
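A PySpark Structured Streaming sketch for the stream-processing requirement; the broker, topic, and paths are hypothetical, and the spark-sql-kafka connector must be available on the cluster:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream").getOrCreate()

# Read a Kafka topic as a stream; requires the spark-sql-kafka connector.
stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker
         .option("subscribe", "clicks")                      # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

events = stream.selectExpr("CAST(value AS STRING) AS json")

# Land micro-batches as Parquet; checkpointing gives restartable progress.
query = (
    events.writeStream.format("parquet")
          .option("path", "s3://example-bucket/stream/clicks/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/clicks/")
          .trigger(processingTime="1 minute")
          .start()
)
query.awaitTermination()
```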

Posted 2 days ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Quant Engineer
Location: Remote

Job Description: Strong Python developer with up-to-date skills, including web development, cloud (ideally Azure), Docker, testing, and DevOps (ideally Terraform + GitHub Actions). Data engineering (PySpark, lakehouses, Kafka) is a plus. A good understanding of maths and finance is needed, as the role interacts with quant devs, analysts and traders: familiarity with, e.g., PnL, greeks, volatility, partial derivatives, the normal distribution, etc. Financial and/or trading exposure is nice to have, particularly energy commodities.

Responsibilities:
- Productionise quant models into software applications, ensuring robust day-to-day operation, monitoring and back-testing are in place
- Translate trader or quant analyst needs into software product requirements
- Prototype and implement data pipelines
- Coordinate closely with analysts and quants during development of models, acting as technical support and coach
- Produce accurate, performant, scalable, secure software, and support best practices following defined IT standards
- Transform proofs of concept into a larger deployable product, in Shell and outside
- Work in a highly collaborative, friendly Agile environment; participate in ceremonies and continuous improvement activities
- Ensure that documentation and explanations of results of analysis or modelling are fit for purpose for both technical and non-technical audiences
- Mentor and coach other teammates who are upskilling in quants engineering

Professional Qualifications & Skills:
- Graduation/postgraduation/PhD with 8+ years' work experience as a software developer or data scientist
- Degree in STEM: computer science, engineering, mathematics, or a relevant field of applied mathematics
- Good understanding of trading terminology and concepts (incl. financial derivatives), gained from experience working in a trading or finance environment

Required Skills:
- Expert in core Python with the Python scientific stack/ecosystem (incl. pandas, numpy, scipy, stats), and a second strongly typed language (e.g. C#, C++, Rust or Java)
- Expert in application design, security, release, testing and packaging
- Mastery of SQL/NoSQL databases and data pipeline orchestration tools
- Mastery of concurrent/distributed programming and performance optimisation methods
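Since the role expects familiarity with greeks and the normal distribution, here is a small worked sketch: closed-form Black-Scholes delta and vega for a European call, using scipy (inputs are illustrative):

```python
from math import log, sqrt
from scipy.stats import norm

def call_delta_vega(S, K, T, r, sigma):
    """Black-Scholes delta and vega of a European call.

    S: spot, K: strike, T: years to expiry, r: risk-free rate, sigma: implied vol.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    delta = norm.cdf(d1)               # dV/dS
    vega = S * norm.pdf(d1) * sqrt(T)  # dV/dsigma, per 1.0 of vol
    return delta, vega

# Example with illustrative market inputs.
print(call_delta_vega(S=100.0, K=95.0, T=0.5, r=0.03, sigma=0.25))
```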

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

Remote

Network Automation Engineer
Location: Remote work, in India
Duration: 3 months+, with potential for contract-to-hire

Job Overview: We are seeking a highly skilled and experienced Senior Network Automation Engineer with expertise in SaltStack, Python, Ansible, CI/CD pipelines, Git, and Jenkins to join our dynamic team. This role will focus on developing and maintaining network automation solutions to enhance the efficiency, scalability, and reliability of our network infrastructure. The ideal candidate will have extensive experience in automating complex network configurations, troubleshooting issues, and working with cross-functional teams to ensure seamless network operations.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Networking, or a related field (or equivalent work experience)
- 5+ years of experience in network engineering or network automation
- Expert-level proficiency in SaltStack for automation, configuration management, and orchestration of network infrastructure
- Strong Python programming skills for network automation and tool development
- Hands-on experience with Ansible for automating network configurations and system deployments
- Solid understanding of CI/CD pipelines, including tools like Jenkins and Git, and associated best practices for automated testing and deployment in network environments
- Experience with networking protocols (e.g., TCP/IP, BGP, OSPF, DNS, etc.) and configuring network devices (routers, switches, firewalls)
- Strong troubleshooting skills in network automation and infrastructure environments
- Familiarity with cloud environments (AWS, Azure, etc.) and virtualized networks is a plus
- Excellent communication and collaboration skills to work effectively with cross-functional teams

Preferred Qualifications:
- Certification in Cisco (CCNP, CCIE), Juniper (JNCIP, JNCIE), or similar network automation-related certifications
- Familiarity with containerization technologies like Docker or Kubernetes in the context of network automation
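The Python side of this work often starts with pushing configuration to devices. A minimal sketch with netmiko (a SaltStack deployment would typically wrap a step like this in states and orchestration); the host and credentials are placeholders:

```python
from netmiko import ConnectHandler

# Placeholder device details; in practice these come from an inventory/vault.
device = {
    "device_type": "cisco_ios",
    "host": "10.0.0.1",
    "username": "admin",
    "password": "secret",
}

conn = ConnectHandler(**device)

# Push a small config change and print the device's echo of the session.
output = conn.send_config_set([
    "interface GigabitEthernet0/1",
    "description uplink-to-core",
])
print(output)

conn.disconnect()
```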

Posted 2 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Role: We are looking for a passionate and skilled Full Stack Developer with strong experience in React.js, Node.js, and AWS Lambda to build a custom enterprise platform that interfaces with a suite of SDLC tools. This platform will streamline tool administration, automate provisioning and deprovisioning of access, manage licenses, and offer centralized dashboards for governance and monitoring.

Required Skills & Qualifications:
- 4-6 years of hands-on experience as a Full Stack Developer
- Proficient in React.js and component-based front-end architecture
- Strong backend experience with Node.js and RESTful API development
- Solid experience with AWS Lambda, API Gateway, DynamoDB, S3, etc.
- Prior experience integrating and automating workflows for SDLC tools like JIRA, Jenkins, GitLab, Bitbucket, GitHub, SonarQube, etc.
- Understanding of OAuth2, SSO, and API-key-based authentication
- Familiarity with CI/CD pipelines, microservices, and event-driven architectures
- Strong knowledge of Git and modern development practices
- Good problem-solving skills and ability to work independently

Nice To Have:
- Experience with Infrastructure-as-Code (e.g., Terraform, CloudFormation)
- Experience with AWS EventBridge, Step Functions, or other serverless orchestration tools
- Knowledge of enterprise-grade authentication (LDAP, SAML, Okta)
- Familiarity with monitoring/logging tools like CloudWatch, ELK, or DataDog

Posted 2 days ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana

Remote

Senior Software Engineer
Hyderabad, Telangana, India

Date posted: Jul 31, 2025
Job number: 1854515
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time

Overview: The Infrastructure and Developer Platform (IDP) team within the Microsoft Threat Protection (MTP) organization builds and maintains the infrastructure and developer platform that almost all Defender products (Defender for Endpoint, Defender for Identity, etc.) rely on. This platform allows engineers across MTP to more easily deploy their services, lower costs, and increase security and reliability across our fleet. This position will specifically focus on Azure Kubernetes security within the organization. The IDP team is at the heart of Microsoft's security infrastructure, providing the essential tools and frameworks that empower our engineers to innovate and deliver cutting-edge security solutions. Our platform is designed to streamline the deployment process, enhance cost-efficiency, and bolster the security and reliability of our services. By leveraging the latest technologies and best practices, we ensure that our Defender products operate seamlessly and securely, protecting millions of users worldwide.

Qualifications (Required):
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field
- 8+ years of professional software engineering experience, with a strong track record of delivering production-grade distributed systems
- Deep expertise in Kubernetes, including cluster architecture, workload orchestration, and security hardening (e.g. RBAC, Workload Identity, container runtime security)
- Hands-on experience with containers (Docker, containerd) and container orchestration platforms (AKS, EKS, GKE)
- Proficiency in one or more programming languages such as Go, C++, C, Python, or Java
- Experience building and operating cloud-native services on Azure or other public cloud platforms
- Strong understanding of site reliability engineering (SRE) principles, including observability, incident response, and automation
- Ability to work effectively with cross-functional teams and manage multiple priorities

Qualifications (Other Requirements):
- CKA, CKS, CISSP, or other relevant security and Kubernetes certifications
- Experience building Kubernetes Operators and working with large-scale AKS deployments, ideally in enterprise or hyperscale environments
- Experience with security frameworks such as NIST, CIS Benchmarks, and PCI-DSS, and ability to assess and mitigate risks in Kubernetes environments
- Familiarity with Linux internals, networking, and kernel-level container security

Responsibilities:
- Serve as a Kubernetes subject matter expert, driving architecture, design, and implementation of scalable, secure, and resilient AKS-based solutions
- Design and implement cloud-native security solutions using Azure technologies, with a focus on container runtime protection, policy enforcement, and threat detection
- Own and deliver production-grade services with high availability, reliability, and performance across global AKS deployments
- Develop and maintain CI/CD pipelines, secure build systems, and automated testing frameworks tailored for Kubernetes workloads
- Drive observability and telemetry improvements, including logging, monitoring, alerting, and incident response for services
- Identify and implement innovative approaches to secure Kubernetes workloads at scale, including leveraging AI/ML for anomaly detection
- Contribute to strategic initiatives that shape Microsoft's container security roadmap and influence industry best practices
- Mentor junior engineers and contribute to engineering culture through code reviews, design discussions, and knowledge sharing
- Demonstrate ownership and accountability for end-to-end delivery of features and services
- Exhibit a growth mindset by continuously learning and adapting to new technologies, threats, and customer needs

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
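As a flavor of the security-hardening work described above, a small audit sketch with the official Kubernetes Python client, flagging pods whose spec does not enforce a non-root user (one of many checks a CIS-style review would include):

```python
from kubernetes import client, config

# Assumes a local kubeconfig; use config.load_incluster_config() inside a pod.
config.load_kube_config()
v1 = client.CoreV1Api()

# Pod-level check only; container-level securityContext is a further refinement.
for pod in v1.list_pod_for_all_namespaces().items:
    sc = pod.spec.security_context
    if not (sc and sc.run_as_non_root):
        print(f"{pod.metadata.namespace}/{pod.metadata.name} may run as root")
```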

Posted 2 days ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra

On-site

COMPANY OVERVIEW: Domo's AI and Data Products Platform lets people channel AI and data into innovative uses that deliver a measurable impact. Anyone can use Domo to prepare, analyze, visualize, automate, and build data products that are amplified by AI.

POSITION SUMMARY: As a DevOps Engineer in Pune, India at Domo, you will play a crucial role in designing, implementing, and maintaining scalable and reliable infrastructure to support our data-driven platform. You will collaborate closely with engineering, product, and operations teams to streamline deployment pipelines, improve system reliability, and optimize cloud environments. If you thrive in a fast-paced environment and have a passion for automation, optimization, and software development, we want to hear from you!

KEY RESPONSIBILITIES:
- Design, build, and maintain scalable infrastructure using cloud platforms (AWS, GCP, or Azure)
- Develop and manage CI/CD pipelines to enable rapid and reliable deployments
- Automate provisioning, configuration, and management of infrastructure using tools like Terraform, Ansible, Salt, or similar
- Develop and maintain tooling to automate, facilitate, and monitor operational tasks (a small exporter sketch follows this posting)
- Monitor system health and performance, troubleshoot issues, and implement proactive solutions
- Collaborate with software engineers to improve service scalability, availability, and security
- Lead incident response and post-mortem analysis to ensure service reliability
- Drive DevOps best practices and continuous improvement initiatives across teams

JOB REQUIREMENTS:
- 3+ years of experience in DevOps, Site Reliability Engineering, or infrastructure engineering roles
- 1+ years working in a SaaS environment
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field
- Expertise in cloud platforms such as AWS, GCP, or Azure; certifications preferred
- Strong experience with infrastructure as code (Terraform, CloudFormation, etc.)
- Proficiency in automation and configuration management tools such as Ansible and Salt
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes)
- Solid understanding of CI/CD tools (Jenkins, GitHub Actions, etc.) and processes
- Strong scripting skills in Python, Bash, or similar languages
- Experience developing applications or tools using Java, Python, or similar programming languages
- Familiarity with Linux system administration and troubleshooting
- Experience with version control systems, particularly GitHub
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack, Datadog)
- Knowledge of networking, security best practices, and cost optimization on cloud platforms
- Excellent communication and collaboration skills

LOCATION: Pune, Maharashtra, India

INDIA BENEFITS & PERKS:
- Medical insurance provided
- Maternity and paternity leave policies
- Baby bucks: a cash allowance to spend on anything for every newborn or child adopted
- "Haute Mama": cash allowance for a maternity wardrobe benefit (only for women employees)
- Annual leave of 18 days + 10 holidays + 12 sick leaves
- Sodexo Meal Pass
- Health and Wellness Benefit
- One-time Technology Benefit: cash allowance towards the purchase of a tablet or smartwatch
- Corporate National Pension Scheme
- Employee Assistance Programme (EAP)
- Marriage leave up to 3 days
- Bereavement leave up to 5 days

Domo is an equal opportunity employer.
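For the tooling and monitoring responsibilities, a tiny custom exporter is a common pattern with the Prometheus stack named above. A sketch using prometheus_client; the metric name and the random-value probe are placeholders:

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Hypothetical metric; replace the random value with a real probe.
QUEUE_DEPTH = Gauge("worker_queue_depth", "Jobs waiting in the worker queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))
        time.sleep(15)
```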

Posted 2 days ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

Data - Pune
Posted On: 31 Jul 2025
End Date: 31 Dec 2025
Required Experience: 4 - 8 Years

Basic Section
Role: Senior Software Engineer
Employment Type: Full Time Employee
Company: NewVision (New Vision Softcom & Consultancy Pvt. Ltd)
Function: Business Units (BU)
Department/Practice: Data / Data Engineering
Region: APAC
Country: India
Base Office Location: Pune
Working Model: Hybrid
Weekly Off: Pune Office Standard
State: Maharashtra
Skill: Azure Databricks
Highest Education: Graduation/equivalent course
Certification: DP-201 (Designing an Azure Data Solution), DP-203T00 (Data Engineering on Microsoft Azure)
Working Language: English

Job Description
Position Summary: We are seeking a talented Databricks Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities:
- Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools
- Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics
- Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks
- Collaborate with data analysts to understand data requirements and ensure data quality
- Hands-on experience in designing and developing data lakes and warehouses
- Implement data governance practices to ensure data security and compliance
- Monitor and maintain data pipelines for optimal performance and troubleshoot any issues
- Develop and maintain unit tests for data pipeline code
- Work collaboratively with other engineers and data professionals in an Agile development environment

Preferred Skills & Experience:
- Good knowledge of PySpark and working knowledge of Python
- Full-stack Azure data engineering skills (Azure Data Factory, Databricks and Synapse Analytics)
- Experience with large dataset handling
- Hands-on experience in designing and developing data lakes and warehouses

New Vision is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Posted 2 days ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Expedia Group brands power global travel for everyone, everywhere. We design cutting-edge tech to make travel smoother and more memorable, and we create groundbreaking solutions for our partners. Our diverse, vibrant, and welcoming community is essential in driving our success.

Why Join Us? To shape the future of travel, people must come first. Guided by our Values and Leadership Agreements, we foster an open culture where everyone belongs, differences are celebrated, and we know that when one of us wins, we all win. We provide a full benefits package, including exciting travel perks, generous time-off, parental leave, a flexible work model (with some pretty cool offices), and career development resources, all to fuel our employees' passion for travel and ensure a rewarding career journey. We're building a more open world. Join us.

Are you a technologist who is passionate about building robust, scalable, and performant applications and data products? This is exactly what we do: join the Data Engineering & Tooling Team! The Data Engineering & Tooling Team (part of Enterprise Data Products at Expedia) is responsible for making traveler, partner and supply data accessible, unlocking insights and value. Our mission is to build and manage the travel industry's premier data products and SDKs.

Software Development Engineer II (Introduction to team): Our team is looking for a Software Engineer who applies engineering principles to build and improve existing systems. We follow Agile principles, and we're proud to offer a dynamic, diverse and collaborative environment where you can play an impactful role and build your career. Would you like to be part of a global tech company that does travel? Don't wait, apply now!

In this role, you will:
- Implement products and solutions that are highly scalable, with high-quality, clean, maintainable, optimized, modular and well-documented code across the technology stack
- Craft APIs, developing and testing applications and services to ensure they meet design requirements
- Work collaboratively with all members of the technical staff and other partners to build and ship outstanding software in a fast-paced environment
- Apply knowledge of software design principles and Agile methodologies and tools
- Resolve problems and roadblocks as they occur, with help from peers or managers; follow through on details and drive issues to closure
- Assist with supporting production systems (investigate issues and work towards resolution)

Experience and qualifications:
- Bachelor's or Master's degree in Computer Science & Engineering or a related technical field, or equivalent related professional experience
- 2+ years of software development or data engineering experience in an enterprise-level engineering environment
- Proficient with object-oriented programming concepts, with a strong understanding of data structures, algorithms, data engineering (at scale), and computer science fundamentals
- Experience with Java, Scala, the Spring framework, microservice architecture, and orchestration of containerized applications, along with a good grasp of OO design and strong design-patterns knowledge
- Solid understanding of different API types (e.g. REST, GraphQL, gRPC), access patterns and integration
- Prior knowledge and experience of NoSQL databases (e.g. ElasticSearch, ScyllaDB, MongoDB)
- Prior knowledge and experience of big data platforms, batch processing (e.g. Spark, Hive), stream processing (e.g. Kafka, Flink) and cloud-computing platforms such as Amazon Web Services
- Knowledge and understanding of monitoring tools, testing (performance, functional), and application debugging and tuning
- Good communication skills in written and verbal form, with the ability to present information in a clear and concise manner

Accommodation requests: If you need assistance with any part of the application or recruiting process due to a disability, or other physical or mental health conditions, please reach out to our Recruiting Accommodations Team through the Accommodation Request. We are proud to be named as a Best Place to Work on Glassdoor in 2024 and be recognized for award-winning culture by organizations like Forbes, TIME, Disability:IN, and others.

Expedia Group's family of brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Vrbo®, trivago®, Orbitz®, Travelocity®, Hotwire®, Wotif®, ebookers®, CheapTickets®, Expedia Group™ Media Solutions, Expedia Local Expert®, CarRentals.com™, and Expedia Cruises™. © 2024 Expedia, Inc. All rights reserved. Trademarks and logos are the property of their respective owners. CST: 2029030-50

Employment opportunities and job offers at Expedia Group will always come from Expedia Group's Talent Acquisition and hiring teams. Never provide sensitive, personal information to someone unless you're confident who the recipient is. Expedia Group does not extend job offers via email or any other messaging tools to individuals with whom we have not made prior contact. Our email domain is @expediagroup.com. The official website to find and apply for job openings at Expedia Group is careers.expediagroup.com/jobs. Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Posted 2 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Welcome to FieldAssist, where innovation meets excellence! We are a top-tier SaaS platform that specializes in optimizing route-to-market strategies and enhancing brand relationships within the CPG partner ecosystem. With over 1,00,000 sales users representing 600+ CPG brands across 10+ countries in South East Asia, the Middle East, and Africa, we reach 10,000 distributors and 7.5 million retail outlets every day. FieldAssist is a 'Proud Partner to Great Brands' like Godrej Consumers, Saro Africa, Danone, Tolaram, Haldiram's, Eureka Forbes, Bisleri, Nilon's, Borosil, Adani Wilmar, Henkel, Jockey, Emami, Philips, Ching's and Mamaearth, among others. Do you crave a dynamic work environment where you can excel and enjoy the journey? We have the perfect opportunity for you!

Responsibilities:
- Build and maintain robust backend services and REST APIs using Python (Django, Flask, or FastAPI)
- Develop end-to-end ML pipelines including data preprocessing, model inference, and result delivery
- Integrate and scale AI/LLM models, including RAG (Retrieval Augmented Generation) and intelligent agents
- Design and optimize ETL pipelines and data workflows using tools like Apache Airflow or Prefect
- Work with Azure SQL and Cosmos DB for transactional and NoSQL workloads
- Implement and query vector databases for similarity search and embedding-based retrieval (e.g., Azure Cognitive Search, FAISS, or Pinecone); see the sketch after this posting
- Deploy services on Azure Cloud, using Docker and CI/CD practices
- Collaborate with cross-functional teams to bring AI features into product experiences
- Write unit/integration tests and participate in code reviews to ensure high code quality

Who we're looking for:
- Strong command of Python 3.x, with experience in Django, Flask, or FastAPI
- Experience building and consuming RESTful APIs in production systems
- Solid grasp of ML workflows, including model integration, inferencing, and LLM APIs (e.g., OpenAI)
- Familiarity with RAG, vector embeddings, and prompt-based workflows
- Proficient with Azure SQL and Cosmos DB (NoSQL)
- Experience with vector databases (e.g., FAISS, Pinecone, Azure Cognitive Search)
- Proficiency in containerization using Docker, and deployment on Azure Cloud
- Experience with data orchestration tools like Apache Airflow
- Comfortable working with Git, CI/CD pipelines, and observability tools
- Strong debugging, testing (pytest/unittest), and optimization skills

Good to Have:
- Experience with LangChain, transformers, or LLM fine-tuning
- Exposure to MLOps practices and Azure ML
- Hands-on experience with PySpark for data processing at scale
- Contributions to open-source projects or AI toolkits
- Background working in startup-like environments or cross-functional product teams

FieldAssist on the Web:
Website: https://www.fieldassist.com/people-philosophy-culture/
Culture Book: https://www.fieldassist.com/fa-culture-book
CEO's Message: https://www.youtube.com/watch?v=bl_tM5E5hcw
LinkedIn: https://www.linkedin.com/company/fieldassist/
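A minimal sketch of the embedding-retrieval piece with FAISS; the dimension and random vectors stand in for real sentence embeddings:

```python
import numpy as np
import faiss

dim = 384  # assumed embedding size (e.g. a small sentence-transformer)

# Inner product over L2-normalized vectors equals cosine similarity.
index = faiss.IndexFlatIP(dim)

docs = np.random.rand(1000, dim).astype("float32")  # stand-in for real embeddings
faiss.normalize_L2(docs)
index.add(docs)

query = np.random.rand(1, dim).astype("float32")
faiss.normalize_L2(query)

scores, ids = index.search(query, 5)  # top-5 nearest documents
print(ids[0], scores[0])
```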

Posted 2 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description

Role Overview:
Omniful is looking for a skilled and motivated Technical Lead Golang Developer to join our on-site team in Gurugram. In this leadership role, you will be responsible for designing and developing efficient, scalable, and high-performing software solutions using Golang, guiding a team of engineers, and playing a key role in architecture decisions. You will also actively contribute to the entire software development lifecycle, from concept to deployment and beyond.

Responsibilities:
Lead the design, development, testing, and deployment of backend services and APIs using Golang.
Drive architectural decisions and system design for distributed systems and microservices.
Mentor and guide junior developers on best practices, code quality, and development standards.
Collaborate with product managers, frontend developers, and QA to deliver robust and scalable solutions.
Write clean, maintainable, and well-documented code.
Troubleshoot and resolve complex technical issues and bugs.
Conduct code reviews and ensure adherence to development and security standards.

Core Development:
Proficiency in Golang with hands-on experience in building web services and backend systems.
Solid understanding of data structures, algorithms, and design patterns.
Experience with concurrency models and performance optimization in Golang.

APIs & Web Services:
Strong experience in building and consuming RESTful APIs, gRPC, and GraphQL (preferred).
Experience in API versioning and documentation (e.g., Swagger/OpenAPI).

Systems & Architecture:
Deep understanding of microservices architecture.
Experience with message queues (e.g., Kafka, RabbitMQ, NATS) and event-driven architecture (see the consumer sketch after this posting).
Proficiency in containerization and orchestration tools: Docker, Kubernetes.

Testing & CI/CD:
Unit testing, integration testing, and test automation frameworks in Golang.
Familiarity with CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or similar.

Databases & Caching:
Experience with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
Understanding of data modeling, indexing, and query optimization.

Security & DevOps Awareness:
Basic knowledge of authentication, authorization, and secure coding practices.
Exposure to cloud platforms like AWS, GCP, or Azure is a plus.

Qualifications:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
4+ years of industry experience in backend development, preferably with Golang. (ref:hirist.tech)
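One pattern that recurs in the event-driven systems this role describes is the idempotent consumer: brokers like Kafka or RabbitMQ can redeliver a message, so applying it twice must be harmless. The role's stack is Go; purely for illustration, here is the pattern sketched in Python, with the queue, handler, and dedupe store all hypothetical stand-ins.

    import queue
    import time

    processed: set[str] = set()          # stand-in for a durable dedupe store (e.g., Redis)

    def apply_side_effects(message: dict) -> None:
        print("applied", message["id"])  # stand-in for real business logic

    def handle(message: dict) -> None:
        # Idempotency guard: re-delivered messages are acknowledged but not re-applied.
        if message["id"] in processed:
            return
        apply_side_effects(message)
        processed.add(message["id"])

    def consume(q: queue.Queue, max_messages: int) -> None:
        # Poll loop with exponential backoff on failure; a broker client would replace q.
        delay = 1.0
        for _ in range(max_messages):
            message = q.get()
            try:
                handle(message)
                delay = 1.0                   # reset backoff after a success
            except Exception:
                q.put(message)                # crude redelivery for the sketch
                time.sleep(delay)
                delay = min(delay * 2, 30.0)

    demo = queue.Queue()
    for m in ({"id": "m1"}, {"id": "m1"}, {"id": "m2"}):
        demo.put(m)
    consume(demo, max_messages=3)  # "applied" prints once per id; the duplicate is absorbed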

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Delhi, India

On-site

Job Summary
We are looking for a skilled Data Modeler / Architect with 5-8 years of experience in designing, implementing, and optimizing robust data architectures in the financial payments industry. The ideal candidate will have deep expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms such as Databricks or Snowflake. You will play a key role in designing scalable data models, orchestrating reliable data workflows, and ensuring the integrity and performance of mission-critical financial datasets. This is a highly collaborative role, interfacing with engineering, analytics, product, and compliance teams.

Key Responsibilities
Design, implement, and maintain logical and physical data models to support transactional, analytical, and reporting systems.
Develop and manage scalable ETL/ELT pipelines for processing large volumes of financial transaction data (see the Airflow sketch after this posting).
Tune and optimize SQL queries, stored procedures, and data transformations for maximum performance.
Build and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi.
Architect data lakes and warehouses using platforms like Databricks, Snowflake, BigQuery, or Redshift.
Enforce and uphold data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
Collaborate closely with data engineers, analysts, and business stakeholders to understand data needs and deliver solutions.
Conduct data profiling, validation, and quality assurance to ensure clean and consistent data.
Maintain clear and comprehensive documentation for data models, pipelines, and architecture.

Required Skills & Qualifications
5-8 years of experience as a Data Modeler, Data Architect, or Senior Data Engineer in the financial/payments domain.
Advanced SQL expertise, including query tuning, indexing, and performance optimization.
Proficiency in developing ETL/ELT workflows using tools such as Spark, dbt, Talend, or Informatica.
Experience with data orchestration frameworks: Airflow, Dagster, Luigi, etc.
Strong hands-on experience with cloud-based data platforms like Databricks, Snowflake, or equivalents.
Deep understanding of data warehousing principles: star/snowflake schema, slowly changing dimensions, etc.
Familiarity with financial data structures, such as payment transactions, reconciliation, fraud patterns, and audit trails.
Working knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
Strong analytical thinking and problem-solving capabilities in high-scale environments.

Preferred Qualifications
Experience with real-time data pipelines (e.g., Kafka, Spark Streaming).
Exposure to data mesh or data fabric architecture paradigms.
Certifications in Snowflake, Databricks, or relevant cloud platforms.
Knowledge of Python or Scala for data engineering tasks. (ref:hirist.tech)
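Since Airflow DAGs are plain Python, the orchestration layer the posting describes can be sketched compactly. This is a minimal illustration assuming Airflow 2.4+ and the TaskFlow API; the task bodies are placeholders, not a real payments pipeline, and a production version would write to Snowflake or Databricks rather than print.

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def payments_etl():
        @task
        def extract() -> list[dict]:
            # Stand-in for pulling raw transactions from a source system.
            return [{"txn_id": 1, "amount": 125.50}, {"txn_id": 2, "amount": 18.00}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Normalize amounts to integer cents before loading to the warehouse.
            return [{**r, "amount_cents": round(r["amount"] * 100)} for r in rows]

        @task
        def load(rows: list[dict]) -> None:
            # Replace with a Snowflake/Databricks writer in a real pipeline.
            print(f"loaded {len(rows)} rows")

        load(transform(extract()))

    payments_etl()

Chaining the decorated tasks gives Airflow the dependency graph, so retries, backfills, and scheduling come from the orchestrator rather than hand-rolled scripts.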

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in.

Job Description

REQUIREMENTS:
Total experience: 5+ years.
Strong working experience in backend development with Java and Spring Boot.
Hands-on experience with RESTful APIs, JMS, JPA, Spring MVC, Hibernate.
Strong understanding of messaging systems (Kafka, SQS) and caching technologies (Redis; see the cache-aside sketch after this posting).
Experience with SQL (Aurora MySQL) and NoSQL databases (Cassandra, DynamoDB, Elasticsearch).
Proficient with CI/CD pipelines, Java build tools, and modern DevOps practices.
Exposure to AWS services like EC2, S3, RDS, DynamoDB, EMR.
Familiarity with Kubernetes-based orchestration and event-driven architecture.
Experience working in Agile environments with minimal supervision.
Experience with observability tools and performance tuning.
Understanding of orchestration patterns and microservice architecture.
Strong communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
Writing and reviewing great quality code.
Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project.
Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
Determining and implementing design methodologies and tool sets.
Enabling application development by coordinating requirements, schedules, and activities.
Leading or supporting UAT and production rollouts.
Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it.
Addressing issues promptly, and responding positively to setbacks and challenges with a mindset of continuous improvement.
Giving constructive feedback to team members and setting clear expectations.
Helping the team troubleshoot and resolve complex bugs.
Coming up with solutions to issues raised during code/design reviews, and justifying the decisions taken.
Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
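The Redis-plus-SQL combination in the requirements usually shows up as the cache-aside pattern: read the cache first, fall back to the database on a miss, then populate the cache with a TTL. The role itself is Java/Spring Boot; purely for illustration, here is the pattern sketched in Python, with the Aurora MySQL lookup stubbed out.

    import json
    import redis  # assumes a Redis instance reachable on localhost:6379

    r = redis.Redis(decode_responses=True)

    def fetch_user_from_db(user_id: int) -> dict:
        # Stand-in for the Aurora MySQL query this would normally run.
        return {"id": user_id, "name": "example"}

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit: skip the database
        user = fetch_user_from_db(user_id)       # cache miss: read through
        r.setex(key, 300, json.dumps(user))      # populate with a 5-minute TTL
        return user

The TTL bounds staleness; writes would either invalidate or overwrite the key, which is the usual trade-off discussion in a design review for this stack.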

Posted 2 days ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Name: Infrastructure Security Engineer
Location: Onsite - Ahmedabad
Job Type: Full Time

Position Overview
We are seeking an experienced Infrastructure Security Engineer to join our cybersecurity team and play a critical role in protecting our organization's digital infrastructure. This position requires a versatile security professional who can operate across multiple domains including cloud security, vulnerability and patch management, endpoint protection, and security operations.

Key Responsibilities

AWS Cloud Security
Design, implement, and maintain security controls across AWS environments including IAM policies, security groups, NACLs, and VPC configurations (see the audit sketch after this posting).
Configure and manage AWS security services such as CloudTrail, GuardDuty, Security Hub, Config, and Inspector.
Implement Infrastructure as Code (IaC) security best practices using CloudFormation, Terraform, or CDK.
Conduct regular security assessments of cloud architectures and recommend improvements.
Manage AWS compliance frameworks and ensure adherence to industry standards (SOC 2, ISO 27001, etc.).

Vulnerability Management
Lead enterprise-wide vulnerability assessment programs using tools such as Nessus.
Develop and maintain vulnerability and patch management policies, procedures, SLAs, and regular reporting.
Coordinate with IT and development teams to prioritize and remediate security vulnerabilities.
Generate executive-level reports on vulnerability metrics and risk exposure.
Conduct regular penetration testing and security assessments of applications and infrastructure.

Patch Management
Design and implement automated patch management strategies across Windows, Linux, and cloud environments.
Coordinate with system administrators to schedule and deploy critical security patches.
Maintain patch testing procedures to minimize business disruption.
Monitor patch compliance across the enterprise and report on patch deployment status.
Develop rollback procedures and incident response plans for patch-related issues.

Endpoint Security
Deploy and manage endpoint detection and response (EDR) solutions such as CrowdStrike.
Configure and tune endpoint security policies including antivirus, application control, and device encryption.
Investigate and respond to endpoint security incidents and malware infections.
Implement mobile device management (MDM) and bring-your-own-device (BYOD) security policies.
Conduct forensic analysis of compromised endpoints when required.

Required Qualifications

Education & Experience
Bachelor's degree in Computer Science, Information Security, or a related field.
Minimum 5+ years of hands-on experience in information security roles.
3+ years of experience with AWS cloud security architecture and services.

Technical Skills
Cloud Security: deep expertise in AWS security services, IAM, VPC security, and cloud compliance frameworks.
Vulnerability Management: proficiency with vulnerability scanners (Qualys, Nessus, Rapid7) and risk assessment methodologies.
Patch Management: experience with automated patching tools (WSUS, Red Hat Satellite, AWS Systems Manager).
Endpoint Security: hands-on experience with EDR/XDR platforms and endpoint management tools.
SIEM/SOAR: advanced skills in log analysis, correlation rule development, and security orchestration.
Operating Systems: strong knowledge of Windows and Linux security hardening and administration.

Security Certifications (Preferred)
AWS Certified Security - Specialty
CISSP (Certified Information Systems Security Professional)
GCIH (GIAC Certified Incident Handler)
CEH (Certified Ethical Hacker)

Key Competencies
Strong analytical and problem-solving skills with attention to detail.
Excellent communication skills and the ability to explain complex security concepts to technical and non-technical stakeholders.
Project management capabilities, with experience leading cross-functional security initiatives.
Ability to work in fast-paced environments and manage multiple priorities.
Strong understanding of regulatory compliance requirements (PCI-DSS, HIPAA, SOX, GDPR).
Experience with risk assessment frameworks and security governance.

Reporting Structure
This position reports to the Engineering Manager, Cyber Security, and collaborates closely with IT Operations and Development Teams.
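Much of the AWS security-group work above lends itself to small audit scripts. As a hedged example, not this team's actual tooling, the following boto3 sketch flags security groups that allow inbound traffic from anywhere, one of the checks that services like Security Hub and Config automate; it assumes AWS credentials and a default region are already configured.

    import boto3

    ec2 = boto3.client("ec2")

    def world_open_groups() -> list[str]:
        # Flag any security group with an inbound rule open to 0.0.0.0/0.
        risky = []
        for sg in ec2.describe_security_groups()["SecurityGroups"]:
            for rule in sg.get("IpPermissions", []):
                if any(r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])):
                    risky.append(sg["GroupId"])
                    break
        return risky

    if __name__ == "__main__":
        print("Security groups open to the internet:", world_open_groups())

A production version would paginate describe_security_groups and feed findings into a SIEM rather than printing them.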

Posted 2 days ago

Apply

4.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position Overview
We are seeking an experienced Python Developer with 4+ years of professional experience to join our dynamic development team. The ideal candidate will have strong expertise in backend development, API design, and database management, with excellent communication skills and a collaborative mindset.

Key Responsibilities

Backend Development
Design, develop, and maintain robust backend systems using Python.
Write clean, efficient, and maintainable code following best practices.
Optimize application performance and ensure scalability.
Implement security best practices and maintain code quality standards.

API Development
Design and develop RESTful APIs and web services (see the FastAPI sketch after this posting).
Create comprehensive API documentation.
Ensure API security, versioning, and proper error handling.
Integrate third-party APIs and services.

Database Management and Design
Design and implement efficient database schemas.
Optimize database queries and performance.
Manage data migration and database versioning.
Ensure data integrity and implement backup strategies.

Communication and Collaboration
Collaborate effectively with cross-functional teams, including frontend developers, designers, and product managers.
Participate in code reviews and provide constructive feedback.
Communicate technical concepts clearly to both technical and non-technical stakeholders.
Document technical specifications and system architecture.

Team Coordination
Mentor junior developers and provide technical guidance.
Participate in agile development processes and sprint planning.
Coordinate with team members on project deliverables and timelines.
Contribute to technical decision-making and architecture discussions.

Required Skills and Experience

Technical Skills
4+ years of professional Python development experience.
Django: extensive experience with the Django framework for web development.
FastAPI: proficiency in building modern, fast APIs with FastAPI.
PostgreSQL: strong knowledge of PostgreSQL database management.
Research & Development: ability to explore new technologies and implement innovative solutions.
Frontend Technologies: working knowledge of JavaScript, HTML, and CSS.

Soft Skills
Excellent verbal and written communication skills.
Strong problem-solving and analytical abilities.
Ability to work independently and as part of a team.
Attention to detail and commitment to quality.
Adaptability and willingness to learn new technologies.

Good to Have
Cloud Services: experience with AWS (Amazon Web Services) or GCP (Google Cloud Platform); knowledge of cloud deployment, scaling, and monitoring; understanding of serverless architecture and microservices.

Additional Technical Skills
NoSQL Databases: experience with MongoDB, Redis, or other NoSQL solutions.
Docker: containerization and orchestration experience.
Experience with CI/CD pipelines.
Knowledge of testing frameworks and test-driven development.
Understanding of DevOps practices. (ref:hirist.tech)
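As a small illustration of the REST style and error handling the role calls for, here is a hedged FastAPI sketch (pydantic v2). The in-memory dict stands in for PostgreSQL, and the Item model is hypothetical; a real service would use an ORM and migrations.

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI()
    db: dict[int, dict] = {}  # stand-in for a PostgreSQL table

    class Item(BaseModel):
        name: str
        price: float

    @app.post("/items/{item_id}", status_code=201)
    def create_item(item_id: int, item: Item) -> dict:
        # Proper error handling: surface conflicts as HTTP 409 rather than overwriting.
        if item_id in db:
            raise HTTPException(status_code=409, detail="item already exists")
        db[item_id] = item.model_dump()
        return db[item_id]

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> dict:
        if item_id not in db:
            raise HTTPException(status_code=404, detail="item not found")
        return db[item_id]

FastAPI generates the OpenAPI documentation the posting mentions automatically from these type hints and models.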

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies