6.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Us: MyRemoteTeam, Inc. is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Java + AWS DevOps
Location: Any PAN India location - hybrid working model
Experience: 6+ years
Key Focus: Java 11, Java 21, Microservices, Event-Driven Architecture, AWS, Kubernetes

Job Summary: We are looking for a highly skilled Senior Backend Engineer to design, build, and deploy scalable, cloud-native microservices using Java 11, Java 21, Spring Boot, and AWS. The ideal candidate will have strong expertise in event-driven architecture, infrastructure as code (Terraform), and CI/CD automation, while ensuring high code quality through rigorous testing and best practices.

Key Responsibilities:
✅ Design & Development: Architect, develop, and maintain highly scalable microservices using Java 11, Java 21, and Spring Boot. Implement event-driven systems using AWS SNS, SQS, and Lambda. Ensure clean, modular, and testable code with proper design patterns and architectural principles.
✅ Testing & Quality: Promote test automation (unit, integration, contract, and E2E) using JUnit 5, Mockito, and WireMock. Follow shift-left testing and CI/CD best practices to ensure reliability.
✅ Cloud & DevOps: Deploy applications using Docker, Kubernetes, and Helm on AWS. Manage Infrastructure as Code (IaC) with Terraform. Monitor systems using Grafana, Prometheus, Kibana, and Sensu.
✅ Database & Performance: Work with PostgreSQL, DynamoDB, MongoDB, Redis, and Elasticsearch for optimized data storage and retrieval. Ensure high availability, fault tolerance, and performance tuning.
✅ Agile & Collaboration: Work in Scrum with pair programming, peer reviews, and iterative demos. Take ownership of backend features from design to production deployment.

Must-Have Qualifications: 6+ years of hands-on JVM backend development (Java 11 and Java 21). Expertise in Spring Boot, Spring Cloud, and Hibernate. Strong experience with AWS (SNS, SQS, Lambda, S3, CloudFront) and Terraform (IaC). Microservices and event-driven architecture design and implementation. Test automation (JUnit 5, Mockito, WireMock) and CI/CD pipelines (Jenkins, Kubernetes). Database proficiency: PostgreSQL, DynamoDB, MongoDB, Redis. Containerization & orchestration: Docker, Kubernetes, Helm. Monitoring & logging: Grafana, Prometheus, Kibana. Fluent English and strong communication skills.
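Purely illustrative: a minimal sketch of the SNS-to-SQS fan-out pattern this posting describes. The role itself is Java-centric; Python/boto3 is used here only to keep the sketch compact, and the topic, queue, and event names are hypothetical.

```python
# Sketch of SNS publish + SQS consume (at-least-once delivery).
# Assumes AWS credentials are configured; names are hypothetical.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

def publish_order_event(topic_arn: str, order: dict) -> None:
    """Publish a domain event to SNS; subscribed SQS queues receive it."""
    sns.publish(TopicArn=topic_arn, Message=json.dumps(order),
                MessageAttributes={"eventType": {"DataType": "String",
                                                 "StringValue": "OrderCreated"}})

def drain_queue(queue_url: str) -> None:
    """Long-poll the queue, process, then delete each message."""
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10,
                               WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])       # SNS envelope (raw delivery off)
        event = json.loads(body["Message"])  # original payload
        print("processing", event)
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```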
Posted 1 day ago
2.0 years
4 - 7 Lacs
Hyderābād
On-site
Working in Application Support means you'll use both creative and critical thinking skills to maintain application systems that are crucial to the daily operations of the firm. As an Application Support analyst at JPMorgan Chase within the Employee Platform, you'll work collaboratively in teams on a wide range of projects based on your primary area of focus: design or programming. While learning to fix application and data issues as they arise, you'll also gain exposure to software development, testing, deployment, maintenance, and improvement, in addition to production lifecycle methodologies and risk guidelines. Finally, you'll have the opportunity to develop professionally and to grow your career in any direction you choose.

Job responsibilities
Participate in triaging, examining, diagnosing, and resolving incidents, and work with others to solve problems at their root. Participate in the weekend support rota to ensure adequate business support coverage during core hours and weekends as part of a global team. Assist in monitoring production environments for anomalies and address issues using standard observability tools. Identify issues for escalation and communication, and provide solutions to business and technology stakeholders. Participate in root cause calls and drive actions to resolution with a keen focus on preventing incidents. Recognize the manual activity within your role and proactively work towards eliminating it through either system engineering or updating application code.

Required qualifications, capabilities, and skills
Formal training or certification on Application Support concepts and 2+ years of experience, or equivalent expertise troubleshooting, resolving, and maintaining information technology services. Experience with observability and monitoring tools and techniques; familiarity with tools such as Splunk, ServiceNow, Dynatrace, etc. Experience with one or more general-purpose programming languages (Python or C#) and/or automation scripting (PowerShell). Experience with CI/CD tools like Jenkins, Bitbucket, GitLab, Terraform. Eagerness to participate in learning opportunities to enhance one's effectiveness in executing day-to-day project activities.

Preferred qualifications, capabilities, and skills
Experience with and understanding of Genetec Security Desk. Understanding of cloud infrastructure.
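For a sense of the scripting side of a role like this, a small, self-contained triage helper: it scans an application log for ERROR/FATAL lines and prints the most frequent ones. The log path is hypothetical.

```python
# Illustrative log-triage helper; stdlib only.
from collections import Counter
import re

LEVEL_RE = re.compile(r"\b(ERROR|FATAL)\b")

def summarize_errors(log_path: str, top_n: int = 5) -> None:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if LEVEL_RE.search(line):
                counts[line.strip()[:100]] += 1  # bucket by line prefix
    for msg, n in counts.most_common(top_n):
        print(f"{n:>6}  {msg}")

summarize_errors("/var/log/app/application.log")  # hypothetical path
```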
Posted 1 day ago
5.0 years
0 Lacs
Gurgaon
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Senior Data Engineer with a collaborative, "can-do" attitude who is committed and works with determination and motivation to make their team successful; a Sr. Data Engineer who has experience architecting and implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in the digital journey by modeling and transforming data to achieve actionable business outcomes. The Sr. Data Engineer will create, troubleshoot, and support ETL pipelines and the cloud infrastructure involved in the process, and will be able to support the visualization team.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate deep technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Be efficient in ETL/ELT development using Azure cloud services and Snowflake, testing, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions. Build a cross-platform data strategy to aggregate multiple sources and process development datasets. Be proactive in stakeholder communication; mentor/guide junior resources by doing regular KT/reverse KT, and help them identify production bugs/issues if needed and provide resolution recommendations.

Job Requirements
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred. 5+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment.
5+ years of experience with setting up and operating data pipelines using Python or SQL. 5+ years of advanced SQL programming: PL/SQL, T-SQL. 5+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 5+ years of strong and extensive hands-on experience in Azure, preferably data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and Big Data. 5+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 5+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks. Strong collaboration and teamwork skills, and excellent written and verbal communication skills. Self-starter, motivated, with the ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Knowledge
Strong knowledge of Data Engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM), and Data Quality tools. Strong experience in ETL/ELT development, QA, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks, and Azure certification is a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake. #LI-DS1
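A minimal sketch of the kind of ELT step into Snowflake these requirements describe, assuming the snowflake-connector-python package; the account, stage, and table names are placeholders, not part of the posting.

```python
# Load a staged file, then transform into a reporting table (ELT style).
import snowflake.connector

conn = snowflake.connector.connect(
    account="myaccount", user="etl_user", password="***",  # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="STAGING",
)
try:
    cur = conn.cursor()
    cur.execute("COPY INTO staging.orders FROM @etl_stage/orders/ "
                "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("""
        INSERT INTO reporting.daily_orders
        SELECT order_date, COUNT(*), SUM(amount)
        FROM staging.orders GROUP BY order_date
    """)
finally:
    conn.close()
```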
Posted 1 day ago
3.0 years
0 Lacs
Gurgaon
On-site
Job Description
Alimentation Couche-Tard Inc. (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space, with over 17,000 stores in 31 countries serving more than 6 million customers each day. It is an exciting time to be a part of the growing Data Engineering team at Circle K. We are driving a well-supported cloud-first strategy to unlock the power of data across the company and help teams discover, value, and act on insights from data across the globe. With our strong data pipeline, this position will play a key role partnering with our Technical Development stakeholders to enable analytics for long-term success.

About the role
We are looking for a Data Engineer with a collaborative, "can-do" attitude who is committed and works with determination and motivation to make their team successful; a Data Engineer who has experience implementing technical solutions as part of a greater data transformation strategy. This role is responsible for hands-on sourcing, manipulation, and delivery of data from enterprise business systems to the data lake and data warehouse. This role will help drive Circle K's next phase in the digital journey by transforming data to achieve actionable business outcomes.

Roles and Responsibilities
Collaborate with business stakeholders and other technical team members to acquire and migrate data sources that are most relevant to business needs and goals. Demonstrate technical and domain knowledge of relational and non-relational databases, data warehouses, and data lakes, among other structured and unstructured storage options. Determine solutions that are best suited to develop a pipeline for a particular data source. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Be efficient in ELT/ETL development using Azure cloud services and Snowflake, including testing and operational support (RCA, monitoring, maintenance). Work with modern data platforms including Snowflake to develop, test, and operationalize data pipelines for scalable analytics delivery. Provide clear documentation for delivered solutions and processes, integrating documentation with the appropriate corporate stakeholders. Identify and implement internal process improvements for data management (automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability). Stay current with and adopt new tools and applications to ensure high-quality and efficient solutions. Build a cross-platform data strategy to aggregate multiple sources and process development datasets. Be proactive in stakeholder communication; mentor/guide junior resources by doing regular KT/reverse KT, and help them identify production bugs/issues if needed and provide resolution recommendations.

Job Requirements
Bachelor's degree in Computer Engineering, Computer Science, or a related discipline; Master's degree preferred. 3+ years of ETL design, development, and performance tuning using ETL tools such as SSIS/ADF in a multi-dimensional data warehousing environment. 3+ years of experience with setting up and operating data pipelines using Python or SQL. 3+ years of advanced SQL programming: PL/SQL, T-SQL. 3+ years of experience working with Snowflake, including Snowflake SQL, data modeling, and performance optimization. Strong hands-on experience with cloud data platforms such as Azure Synapse and Snowflake for building data pipelines and analytics workloads. 3+ years of
strong and extensive hands-on experience in Azure, preferably data-heavy/analytics applications leveraging relational and NoSQL databases, data warehouses, and Big Data. 3+ years of experience with Azure Data Factory, Azure Synapse Analytics, Azure Analysis Services, Azure Databricks, Blob Storage, Databricks/Spark, Azure SQL DW/Synapse, and Azure Functions. 3+ years of experience in defining and enabling data quality standards for auditing and monitoring. Strong analytical abilities and a strong intellectual curiosity. In-depth knowledge of relational database design, data warehousing, and dimensional data modeling concepts. Understanding of REST and good API design. Experience working with Apache Iceberg, Delta tables, and distributed computing frameworks. Strong collaboration and teamwork skills, and excellent written and verbal communication skills. Self-starter, motivated, with the ability to work in a fast-paced development environment. Agile experience highly desirable. Proficiency in the development environment, including IDE, database server, Git, continuous integration, unit-testing tools, and defect management tools.

Preferred Skills
Strong knowledge of Data Engineering concepts (data pipeline creation, data warehousing, data marts/cubes, data reconciliation and audit, data management). Strong working knowledge of Snowflake, including warehouse management, Snowflake SQL, and data sharing techniques. Experience building pipelines that source from or deliver data into Snowflake in combination with tools like ADF and Databricks. Working knowledge of DevOps processes (CI/CD), the Git/Jenkins version control tools, Master Data Management (MDM), and Data Quality tools. Strong experience in ETL/ELT development, QA, and the operation/support process (RCA of production issues, code/data fix strategy, monitoring and maintenance). Hands-on experience with databases (Azure SQL DB, MySQL, Cosmos DB, etc.), file systems (Blob Storage), and Python/Unix shell scripting. ADF, Databricks, and Azure certification is a plus.

Technologies we use: Databricks, Azure SQL DW/Synapse, Azure Tabular, Azure Data Factory, Azure Functions, Azure Containers, Docker, DevOps, Python, PySpark, Scripting (PowerShell, Bash), Git, Terraform, Power BI, Snowflake. #LI-DS1
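For illustration, a small PySpark transformation of the sort this role covers (e.g., on Databricks, writing a Delta table); the storage path and table names are hypothetical.

```python
# Aggregate raw CSV orders into a daily reporting table.
# Writing "delta" assumes Databricks or the delta-lake package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily").getOrCreate()

raw = spark.read.option("header", True).csv(
    "abfss://raw@mystorageaccount.dfs.core.windows.net/orders/")  # hypothetical

daily = (raw.withColumn("amount", F.col("amount").cast("double"))
            .groupBy("order_date")
            .agg(F.count("*").alias("orders"),
                 F.sum("amount").alias("revenue")))

daily.write.mode("overwrite").format("delta").saveAsTable("reporting.daily_orders")
```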
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: DevOps Engineer
Location: Chennai (full-time, at office)
Years of Experience: 4-8 years

Job Summary: We are seeking a skilled DevOps engineer with knowledge of automation, continuous integration, and deployment and delivery processes. The ideal candidate should be a self-starter with hands-on production experience and excellent communication skills.

Key Responsibilities:
● Infrastructure as Code: apply first principles to cloud infrastructure, system design, and application deployments.
● CI/CD pipelines: design, implement, troubleshoot, and maintain CI/CD pipelines.
● System administration: skills with systems, networking, and security fundamentals.
● Proficiency in coding: hands-on experience in programming languages, and the ability to write, review, and troubleshoot code for infrastructure.
● Monitoring and observability: track the performance and health of services and configure alerts with interactive dashboards for reporting.
● Security: best practices and familiarity with audits, compliance, and regulation.
● Communication skills: clearly and effectively discuss and collaborate across cross-functional teams.
● Documentation: using Agile methodologies, Jira, and Git.

Qualification:
● Education: Bachelor's degree in CS, IT, or a related field (or equivalent work experience).
● Skills*: Infrastructure: Docker, Kubernetes, ArgoCD, Helm, Chronos, GitOps. Automation: Ansible, Puppet, Chef, Salt, Terraform, OpenTofu. CI/CD: Jenkins, CircleCI, ArgoCD, GitLab, GitHub Actions. Cloud platforms: Amazon Web Services (AWS), Azure, Google Cloud. Operating Systems: Windows, *nix distributions (Fedora, Red Hat, Ubuntu, Debian), *BSD, macOS. Monitoring and observability: Prometheus, Grafana, Elasticsearch, Nagios. Databases: MySQL, PostgreSQL, MongoDB, Qdrant, Redis. Programming Languages: Python, Bash, JavaScript, TypeScript, Golang. Documentation: Atlassian Jira, Confluence, Git. (* Proficient in one or more tools in each category.)

Additional Requirements:
• Include a GitHub or GitLab profile link in the resume.
• Only candidates with a Computer Science or Information Technology engineering background will be considered.
• Primary operating system should be Linux (Ubuntu or any distribution) or macOS.
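As a flavor of the monitoring and observability work described above, a short sketch using the official Kubernetes Python client to list pods that are not Running; it assumes a local kubeconfig, and the environment names are your own.

```python
# Flag pods that are not in the Running phase across all namespaces.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    if pod.status.phase != "Running":
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```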
Posted 1 day ago
8.0 years
4 - 8 Lacs
Gurgaon
On-site
- 8+ years' experience in Java/J2EE and 2+ years on any cloud platform; Bachelor's in IT, CS, Math, Physics, or a related field.
- Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
- Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
- Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.

The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious.
Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer). Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR). Strong communication skills, able to explain technical concepts to both technical and non-technical audiences. Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
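Illustrative only: the kind of account-hygiene automation a consultant in a role like this might write with boto3, flagging EC2 instances that are missing a required tag. The tagging policy and tag key are hypothetical.

```python
# Report EC2 instances missing a required cost-allocation tag.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
            if "CostCenter" not in tags:  # hypothetical tagging policy
                print(inst["InstanceId"], inst["State"]["Name"])
```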
Posted 1 day ago
4.0 - 6.0 years
0 Lacs
Gurgaon
On-site
Locations: Bengaluru | Gurgaon

Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation: inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a part of BCG's X team, you will work closely with consulting teams on a diverse range of advanced analytics and engineering topics. You will have the opportunity to leverage analytical methodologies to deliver value to BCG's Consulting (case) teams and Practice Areas (domain) by providing analytical and engineering subject matter expertise. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining data pipelines, systems, and solutions that empower our clients to make informed business decisions. You will collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-quality data solutions that meet our clients' needs.

YOU'RE GOOD AT
Delivering original analysis and insights to case teams, typically owning all or part of an analytics module whilst integrating with a case team. Designing, developing, and maintaining efficient and robust data pipelines for extracting, transforming, and loading data from various sources to data warehouses, data lakes, and other storage solutions. Building data-intensive solutions that are highly available, scalable, reliable, secure, and cost-effective using programming languages like Python and PySpark. Deep knowledge of Big Data querying and analysis tools, such as PySpark, Hive, Snowflake, and Databricks. Broad expertise in at least one cloud platform like AWS/GCP/Azure. Working knowledge of automation and deployment tools such as Airflow, Jenkins, GitHub Actions, etc., as well as infrastructure-as-code technologies like Terraform and CloudFormation. Good understanding of DevOps, CI/CD pipelines, orchestration, and containerization tools like Docker and Kubernetes. Basic understanding of Machine Learning methodologies and pipelines. Communicating analytical insights through sophisticated synthesis and packaging of results (including PPT slides and charts) with consultants; collecting, synthesizing, and analyzing case team learning and inputs into new best practices and methodologies.

Communication Skills: Strong communication skills, enabling effective collaboration with both technical and non-technical team members.

Thinking Analytically: You should be strong in analytical solutioning, with hands-on experience in advanced analytics delivery through the entire lifecycle of analytics. Strong analytics skills with the ability to develop and codify knowledge and provide analytical advice where required.
What You'll Bring
Bachelor's / Master's degree in computer science engineering/technology. At least 4-6 years within the relevant domain of Data Engineering across industries, and work experience providing analytics solutions in a commercial setting. Consulting experience will be considered a plus. Proficient understanding of distributed computing principles, including management of Spark clusters with all included services; various implementations of Spark preferred. Basic hands-on experience with data engineering tasks like productizing data pipelines, building CI/CD pipelines, and code orchestration using tools like Airflow, DevOps, etc.

Good to have: Software engineering concepts and best practices, like API design and development, testing frameworks, packaging, etc. Experience with NoSQL databases, such as HBase, Cassandra, MongoDB. Knowledge of web development technologies. Understanding of different stages of machine learning system design and development.

Who You'll Work With
You will work with the case team and/or client technical POCs and the broader X team.

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
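A minimal Airflow DAG sketch of the kind of pipeline orchestration mentioned above (Airflow 2.4+ API); the DAG id, schedule, and task bodies are placeholders.

```python
# Three-step ETL skeleton wired into an Airflow schedule.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...   # placeholder task bodies
def transform(): ...
def load(): ...

with DAG(dag_id="client_daily_etl", start_date=datetime(2025, 1, 1),
         schedule="@daily", catchup=False) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency chain
```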
Posted 1 day ago
0 years
0 Lacs
India
Remote
About Us
Our leading SaaS-based Global Growth Platform™ enables clients to expand into over 180 countries quickly and efficiently, without the complexities of establishing local entities. At G-P, we're dedicated to breaking down barriers to global business and creating opportunities for everyone, everywhere. Our diverse, remote-first teams are essential to our success. We empower our Dream Team members with flexibility and resources, fostering an environment where innovation thrives and every contribution is valued and celebrated. The work you do here will positively impact lives around the world. We stand by our promise: Opportunity Made Possible. In addition to competitive compensation and benefits, we invite you to join us in expanding your skills and helping to reshape the future of work. At G-P, we assist organizations in building exceptional global teams in days, not months, streamlining the hiring, onboarding, and management process to unlock growth potential for all.

About The Role
As a Principal AI Engineer, you will design, develop, and deploy AI solutions that address complex business challenges. This role requires advanced expertise in artificial intelligence, including machine learning and natural language processing, and the ability to implement these technologies in production-grade systems.

Key Responsibilities
Develop innovative, scalable AI solutions for real business problems. Drive the full lifecycle of projects from conception to deployment, ensuring alignment with business objectives. Own highly open-ended projects end-to-end, from the analysis of business requirements to the deployment of solutions. Expect to dedicate about 20% of your time to understanding problems and collaborating with stakeholders. Manage complex data sets, design efficient data processing pipelines, and work on robust models. Expect to spend approximately 80% of your time on data and ML engineering tasks related to developing AI systems. Work closely with other AI engineers, product managers, and stakeholders to ensure that AI solutions meet business needs and enhance user satisfaction. Write clear, concise, and comprehensive technical documentation for all projects and systems developed. Stay updated on the latest developments in the field. Explore and prototype new technologies and approaches to address specific challenges faced by the business. Develop and maintain high-quality machine learning services. Prioritize robust engineering practices and user-centric development. Be able to work independently and influence at different levels of the organization. Be highly motivated and results-driven.

Required Skills And Qualifications
Master's degree in Computer Science, Machine Learning, Statistics, Engineering, Mathematics, or a related field. Deep understanding and practical experience in machine learning and natural language processing, especially LLMs. Strong foundational knowledge in statistical modeling, probability, and linear algebra. Extensive practical experience with curating datasets, training models, analyzing post-deployment data, and developing robust metrics to ensure model reliability. Experience developing and maintaining machine learning services for real-world applications at scale. Strong Python programming skills. High standards for code craftsmanship (maintainable, testable, production-ready code). Proficiency with Docker. Knowledge of system design and cloud infrastructure for secure and scalable AI solutions.
Proficiency with AWS. Proven track record in driving AI projects with strong technical leadership. Excellent communication skills when engaging with both technical and non-technical stakeholders.

Nice To Have Qualifications
Experience with natural language processing for legal applications. Proficiency with Terraform. React and Node.js experience.

If you're ready to make an impact in a fast-paced startup environment, with a team that embraces innovation and hard work, G-P is the place for you. Be ready to hustle and put in the extra hours when needed to drive our mission forward. We will consider for employment all qualified applicants who meet the inherent requirements for the position. Please note that background checks are required, and this may include criminal record checks.

G-P. Global Made Possible.

G-P is a proud Equal Opportunity Employer, and we are committed to building and maintaining a diverse, equitable and inclusive culture that celebrates authenticity. We prohibit discrimination and harassment against employees or applicants on the basis of race, color, creed, religion, national origin, ancestry, citizenship status, age, sex or gender (including pregnancy, childbirth, and pregnancy-related conditions), gender identity or expression (including transgender status), sexual orientation, marital status, military service and veteran status, physical or mental disability, genetic information, or any other legally protected status. G-P also is committed to providing reasonable accommodations to individuals with disabilities. If you need an accommodation due to a disability during the interview process, please contact us at careers@g-p.com.
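As a sketch of the "high-quality machine learning services" bullet, a thin FastAPI serving endpoint; the model call is stubbed so the example stays self-contained, and the route and schema are hypothetical.

```python
# Minimal model-serving service; replace the stub with a real model call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    text: str

@app.post("/classify")
def classify(q: Query) -> dict:
    # A real service would call model.predict(q.text) here; this stub
    # keeps the sketch runnable without any model artifact.
    label = "positive" if "good" in q.text.lower() else "negative"
    return {"label": label}
```

Run locally with `uvicorn app:app` (assuming the file is named app.py) and POST JSON like {"text": "good service"} to /classify.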
Posted 1 day ago
8.0 years
20 - 28 Lacs
Gurgaon
On-site
Job Title: DevOps Engineer
Location: Gurgaon (Work From Office)
Job Type: Full-Time Role
Experience Level: 8-12 Years

Job Summary: We are looking for a skilled and proactive DevOps Engineer to join our technology team. The ideal candidate will be responsible for managing the infrastructure, automating workflows, and ensuring smooth deployment and integration of code across various environments. You will work closely with developers, QA teams, and system administrators to improve CI/CD pipelines, scalability, reliability, and security.

Key Responsibilities: Design, build, and maintain efficient CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Automate provisioning, deployment, monitoring, and scaling of infrastructure. Manage and monitor cloud services (AWS, Azure, GCP) and on-premises environments. Configure and manage container orchestration (Docker, Kubernetes). Implement infrastructure as code using tools like Terraform, CloudFormation, or Ansible. Ensure high availability, performance, and security of production systems. Monitor logs, metrics, and application performance; implement alerting and incident response. Collaborate with development and QA teams to streamline release processes.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience in a DevOps or Systems Engineering role. Proficiency with Linux-based infrastructure. Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP). Strong scripting skills (Bash, Python, PowerShell, etc.). Experience with configuration management and IaC tools (e.g., Terraform, Ansible). Familiarity with containerization and orchestration tools (Docker, Kubernetes). Understanding of networking, security, DNS, load balancing, and firewalls.

Preferred Qualifications: Certification in AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, ELK Stack, Datadog, etc. Exposure to Agile/Scrum methodologies. Knowledge of security best practices in DevOps environments.

Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,800,000.00 per year
Work Location: In person
Speak with the employer: +91 9319571799
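Illustrative of the monitoring and alerting responsibilities above: a tiny custom-metrics exporter using prometheus_client. The metric name is hypothetical, and the random value stands in for a real probe.

```python
# Expose a custom gauge for Prometheus to scrape on port 8000.
import random
import time
from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("app_queue_depth", "Pending jobs in the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrape target
    while True:
        queue_depth.set(random.randint(0, 100))  # stand-in for a real probe
        time.sleep(15)
```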
Posted 1 day ago
3.0 - 7.0 years
5 - 10 Lacs
Pune
Work from Office
This role is for an Engineer who is responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background and have good working experience in Python and Spark technology, should be hands-on and able to work independently with minimal technical/tool guidance, and should be able to technically guide and mentor junior resources in the team. As a developer you will bring extensive design and development skills to strengthen the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

Your key responsibilities
Design and discuss your own solution for addressing user stories and tasks. Develop, unit-test, integrate, deploy, maintain, and improve software. Perform peer code review. Actively participate in the sprint activities and ceremonies, e.g., daily stand-up/scrum meeting, sprint planning, retrospectives, etc. Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management). Collaborate with other team members to achieve the sprint objectives. Report progress and update Agile team management tools (JIRA/Confluence). Manage individual task priorities and deliverables. Take responsibility for the quality of the solutions you provide. Contribute to planning and continuous improvement activities, and support the PO, ITAO, developers, and Scrum Master.

Your skills and experience
Engineer with good development experience on a Big Data platform for at least 5 years. Hands-on experience in Spark (Hive, Impala). Hands-on experience in the Python programming language. Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions. Experience in the set-up, maintenance, and ongoing development of continuous build/integration infrastructure as a part of DevOps. Create and maintain fully automated CI build processes and write build and deployment scripts. Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR. Good knowledge of the core SDLC processes and tools such as HP ALM, Jira, ServiceNow. Strong analytical skills. Proficient communication skills. Fluent in English (written/verbal). Ability to work in virtual teams and in matrixed organizations. Excellent team player. Open-minded and willing to learn business and technology. Keeps pace with technical innovation. Understands the relevant business area. Ability to share information and transfer knowledge and expertise to team members.
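In the spirit of the unit-testing and CI practices this posting emphasises, a pure transformation function with a pytest case that can run in any CI pipeline; the record shape ('id', 'ts') is hypothetical.

```python
# A testable data transform plus its unit test (run with: pytest).
def dedupe_latest(records: list[dict]) -> list[dict]:
    """Keep the newest record per id (records carry 'id' and 'ts')."""
    latest: dict = {}
    for rec in records:
        if rec["id"] not in latest or rec["ts"] > latest[rec["id"]]["ts"]:
            latest[rec["id"]] = rec
    return list(latest.values())

def test_dedupe_latest():
    rows = [{"id": 1, "ts": 1}, {"id": 1, "ts": 3}, {"id": 2, "ts": 2}]
    assert dedupe_latest(rows) == [{"id": 1, "ts": 3}, {"id": 2, "ts": 2}]
```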
Posted 1 day ago
5.0 years
5 - 8 Lacs
Gurgaon
On-site
About the team: The cloud platform teams design, implement, support, and improve the cloud technology stack to ensure systems security, reliability, and availability. We design, deploy, and maintain advanced log analysis and monitoring systems; we lead automated, agile-based release management for our 24x7 online application stack; we maintain and develop our CI/CD pipelines and deployment automation tools for our release processes. We implement IaC using Terraform as well as manage more IaaS-based infrastructure; we manage and support all PaaS platforms in use in our business.

Who we are looking for: We are looking for a highly competent, reliable, self-starting IT generalist with experience in a Windows Server administration or SRE role with web application support. You must have strong infrastructure knowledge with great analysis and problem-solving skills; perhaps you've also done some scripting or automation work in a previous role or in a part-time capacity? Come talk to us!

Responsibilities - Resolve complex technical issues in infrastructure, applications, platforms, and back-office systems. Manage and monitor Azure cloud resources, performance, security, and costs using various tools and frameworks. Provide the third line of support for issues from the front-line incident managers. Deploy software releases to our Azure-based systems using a squad methodology. Be able to clearly think through, communicate, and participate in the wider ITS sessions. Be part of an on-call rota as needed.

We are looking for someone who has: 5 years' experience supporting Azure cloud infrastructure. 3+ years' experience supporting web application technologies. At least 2 years' experience using Octopus Deploy. Excellent knowledge of Azure technologies and the Azure stack. This role requires knowledge of all of the following: TCP/IP, DNS, DHCP, SSL, IIS, Windows Server OS. High proficiency in PowerShell and Bash. Strong IT admin, networking, and troubleshooting skills. Excellent verbal and written communication skills. A can-do attitude; works with minimal oversight to high standards. The ability to prioritise and work in a fast-paced, high-volume, agile environment. Knowledge of Terraform. Knowledge of Hyland Alfresco and HIDP.

Better if you have: Experience in automating and streamlining a software development lifecycle (SDLC), configuration management, etc. Experience using Google Cloud Platform. Experience working in a regulated financial entity. Experience working with agile methodologies such as Scrum or Kanban.

Insight: candidates with a minimum of 3 years' Azure experience and approximately 5 years' experience overall can be considered; Octopus Deploy and scripting experience would also be a bonus.
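A small, assumed-API sketch of the Azure resource inspection this role involves, using the azure-identity and azure-mgmt-compute SDK packages; the subscription id is a placeholder.

```python
# List VMs in a subscription; DefaultAzureCredential picks up CLI,
# environment, or managed-identity credentials automatically.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, "<subscription-id>")  # placeholder

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location)
```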
Posted 1 day ago
4.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 80366
Date: Jun 16, 2025
Location: Delhi CEC
Designation: Consultant
Entity:

Job Description
Location: Gurgaon

About your role: The position is for a Java Development Specialist. The role involves development using core Java skills (OOPs, Collections, multi-threading), SQL, Spring Core, Spring MVC, Hibernate, etc. Knowledge of working in an Agile team with DevOps principles would be an additional advantage. This would also involve intensive interaction with the business and other technology groups, and hence strong communication skills and the ability to work under pressure are an absolute must. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.

Essential Skills: Minimum 4 years of experience with web services & REST APIs. Minimum 2 years of experience with cloud - any one of AWS/Azure/Cloud Foundry/Heroku/GCP. UML, design patterns, data structures, clean coding. Experience in CI/CD, TDD, DevOps; CI/CD tools - Jenkins/UrbanCode/SonarQube/Bamboo. AWS Lambda, Step Functions, DynamoDB, API Gateway, Cognito, S3, SNS, VPC, IAM, EC2, ECS, etc. Hands-on with coding and debugging; should be able to write high-quality code optimized for performance. Good analytical and problem-solving skills; should be good with algorithms. Spring MVC, Spring Boot, Spring Batch, Spring Security. Git, Maven/Gradle. Hibernate (or JPA).

Key Responsibilities: Work on Java/PaaS applications. Own and deliver technically sound solutions for the 'Integration Layer' product. Work and develop on Java/FIL PaaS/AWS applications. Interact with senior architects and other consultants to understand and review the technical solution and direction. Communicate with business analysts to discuss various business requirements. Proactively refactor code/solutions; be aggressive about tech debt identification and reduction. Develop, maintain, and troubleshoot issues, and take a leading role in the ongoing support and enhancement of the applications. Help maintain the standards, procedures, and best practices in the team, and help the team follow these standards. Prioritisation of requirements in the pipeline with stakeholders.

Experience and Qualification: B.E./B.Tech. or M.C.A. in Computer Science from a reputed university. Total 4 to 6 years of experience with application development in Java and related frameworks.

Skills - nice to have: Spring Batch, Spring Integration. PL/SQL, Unix. IaC (Infrastructure as Code) - Terraform/SAM/CloudFormation. JMS, IBM MQ. Layer7/Apigee. Docker/Kubernetes. Microsoft Teams development experience. Linux basics.
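Purely illustrative of the serverless AWS stack listed above (API Gateway + Lambda + DynamoDB), written as a Python Lambda handler for compactness even though the role itself is Java-centric; the table and field names are hypothetical.

```python
# Lambda handler for an API Gateway proxy event: persist an order item.
import json
import boto3

table = boto3.resource("dynamodb").Table("orders")  # hypothetical table

def handler(event, context):
    order = json.loads(event["body"])               # API Gateway proxy body
    table.put_item(Item={"orderId": order["id"], "status": "NEW"})
    return {"statusCode": 201,
            "body": json.dumps({"orderId": order["id"]})}
```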
Posted 1 day ago
5.0 - 8.0 years
7 - 17 Lacs
Chennai
Work from Office
Responsibilities
Implement and manage cloud infrastructure using Infrastructure as Code (IaC) for compute, storage, network services, and container/Kubernetes management to support high-volume, low-latency CAMS applications. Maintain deep understanding and oversight of all IaC solutions to ensure consistent, repeatable, and secure infrastructure capabilities that can scale on demand. Monitor and manage infrastructure performance to meet service level agreements (SLAs), control costs, and prioritize automation in all deployment processes. Ensure that infrastructure designs and architectures align with technical specifications and business requirements. Provide key support and contribute to the full lifecycle ownership of platform services. Adhere to DevOps principles and participate in end-to-end platform ownership, including occasional incident resolution outside normal hours as part of an on-call rota. Engage in project scoping, requirements analysis, and technical discovery to shape effective infrastructure solutions. Perform performance tuning, monitoring, and maintenance of fault-tolerant, highly available infrastructure to deliver scalable services. Maintain detailed oversight of automation processes and infrastructure security, implementing improvements as necessary. Support continuous improvement by researching alternative approaches and technologies and presenting recommendations for architectural review. Collaborate with teams to contribute to architectural design decisions. Utilize experience with CI/CD pipelines, GitOps, and Kubernetes management to streamline deployment and operations.

Work Experience
Over 7 years of proven hands-on technical experience. More than 5 years of experience leading and managing cloud infrastructure, including VPC, compute, storage, container services, Kubernetes, and related technologies. Strong Linux system administration skills across CentOS, Ubuntu, and GKE environments, including patching, configuration, and maintenance. Practical expertise with continuous integration tools such as Jenkins and GitLab, along with build automation and dependency management. Proven track record of delivering software releases on schedule. Committed to a collaborative working style and effective team communication, thriving in small, agile teams. Experience designing and implementing zero-downtime deployment solutions in cloud environments. Solid understanding of database and big data technologies, including both SQL and NoSQL systems.

#Google Cloud Platform #Terraform #Git #GitOps #Kubernetes #IaC

Please share your profiles to divyaa.m@camsonline.com
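One way to picture the zero-downtime deployment concern above: a stdlib-only health gate that keeps probing a service endpoint during a rollout and fails fast if it stops answering. The URL and timing parameters are hypothetical.

```python
# Probe a health endpoint repeatedly; non-zero exit fails the pipeline step.
import time
import urllib.request

def health_gate(url: str, checks: int = 20, interval: float = 3.0) -> bool:
    """Return True only if the endpoint answers 200 for the whole window."""
    for _ in range(checks):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status != 200:
                    return False
        except OSError:          # covers URLError/HTTPError/timeouts
            return False
        time.sleep(interval)
    return True

if __name__ == "__main__":
    ok = health_gate("http://my-service.internal/healthz")  # hypothetical URL
    raise SystemExit(0 if ok else 1)
```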
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Bhubaneshwar
On-site
Position: Senior Security Engineer (NV58FCT RM 3325)

Job Description: 5–8 years of experience in security engineering, preferably with a focus on cloud-based systems. Strong understanding of cloud infrastructure (AWS/GCP/Azure), including IAM, VPC, security groups, key management, etc. Hands-on experience with security tools (e.g., AWS Security Hub, Azure Defender, Prisma Cloud, CrowdStrike, Burp Suite, Nessus, or equivalent). Familiarity with containerization and orchestration security (Docker, Kubernetes). Proficient in scripting (Python, Bash, etc.) and infrastructure automation (Terraform, CloudFormation, etc.). In-depth knowledge of encryption, authentication, authorization, and secure communications. Experience interfacing with clients and translating security requirements into actionable solutions.

Preferred Qualifications: Certifications such as CISSP, CISM, CCSP, OSCP, or cloud-specific certs (e.g., AWS Security Specialty). Experience with zero trust architecture and DevSecOps practices. Knowledge of secure mobile or IoT platforms is a plus.

Soft Skills: Strong communication and interpersonal skills to engage with clients and internal teams. Analytical mindset with attention to detail and a proactive attitude toward risk mitigation. Ability to prioritize and manage multiple tasks in a fast-paced environment. Document architectures, processes, and procedures, ensuring clear communication across the team.

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Bhubaneshwar, Noida
Experience: 5 - 8 Years
Notice period: 0-30 days
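Illustrative of the cloud-security automation this role involves: a boto3 sweep that flags AWS security groups with ingress open to the world. Group names and output format are incidental.

```python
# Flag security groups that allow inbound traffic from 0.0.0.0/0.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg["IpPermissions"]:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")  # absent means all traffic
                print(f"{sg['GroupId']} ({sg['GroupName']}): open ingress on {port}")
```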
Posted 1 day ago
6.0 years
2 - 6 Lacs
Noida
On-site
About Foxit
Foxit is a global software company reshaping how the world interacts with documents. With over 700 million users worldwide, we offer cutting-edge PDF, collaboration, and e-signature solutions across desktop, mobile, and cloud platforms. As we expand our SaaS and cloud-native capabilities, we're seeking a technical leader who thrives in distributed environments and can bridge the gap between development and operations at global scale.

Role Overview
As a Senior Development Support Engineer, you will serve as a key technical liaison between Foxit's global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints.

Key Responsibilities
Cross-Border Development Support: Investigate complex, high-priority production issues inaccessible to China-based developers. Build sanitized diagnostic packages and test environments to enable effective offshore debugging. Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack. Document recurring patterns and technical solutions to improve incident response efficiency. Partner closely with China-based developers to maintain architectural alignment and system understanding.

Cloud Infrastructure & DevOps: Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost. Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus. Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems. Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell.

Collaboration & Communication: Translate production symptoms into actionable debugging tasks for teams without access to global environments. Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues. Ensure alignment with global data compliance policies (SOC2, NSD-104, GDPR) when sharing data across borders. Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders.

Qualifications
Technical Skills: Languages: Advanced in Java and PHP (Spring Boot, Yii); familiarity with JavaScript a plus. Architecture: Experience designing and optimizing backend microservices and APIs. Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB). Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus. IaC & Automation: Proficient in Terraform; scripting with Python/Bash. DevOps: Familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright). Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ.

Professional Experience: 6+ years of full-stack or backend development experience in high-concurrency systems.
Strong understanding of system design, cloud infrastructure, and global software deployment practices. Experience working in global, distributed engineering teams with data privacy or access restrictions.

Preferred: Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA). Familiarity with cloud networking, CDN configuration, and cost optimization strategies. Tools experience with Postman, REST Assured, or security testing frameworks. Language: fluency in English; Mandarin Chinese is a strong plus.

Why Foxit? Work at the intersection of development and operations on a global scale. Be a trusted technical enabler for distributed teams facing real-world constraints. Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions. Competitive compensation, professional development programs, and a collaborative culture. #LI-Hybrid
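A minimal sketch of the "sanitized diagnostic package" idea from the responsibilities above: redact obvious identifiers from a log before it crosses a compliance boundary. The patterns are illustrative, not a complete PII policy, and the file names are hypothetical.

```python
# Redact emails, IPv4 addresses, and credential-like pairs from a log.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<email>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<ip>"),
    (re.compile(r"(?i)(authorization|token|password)\s*[:=]\s*\S+"),
     r"\1=<redacted>"),
]

def sanitize_line(line: str) -> str:
    for pattern, repl in REDACTIONS:
        line = pattern.sub(repl, line)
    return line

with open("app.log") as src, open("app.sanitized.log", "w") as dst:
    dst.writelines(sanitize_line(line) for line in src)
```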
Posted 1 day ago
3.0 - 5.0 years
5 - 8 Lacs
Noida
On-site
Position: DevOps Developer (NV35FCT RM 3313)

Job Description: Design, deploy, and manage cloud infrastructure using AWS (EC2, VPC, ECS, Load Balancers, Auto Scaling Groups, EBS, EFS, FSx, S3, Transit Gateway, Lambda, API Gateway, CloudFront, WAF, IAM, CloudWatch, Route 53, AWS Transfer Family, OpenSearch). Drive AWS cost optimization initiatives, including resource right-sizing, reserved instance planning, and cloud usage analysis. Build and manage containerized applications using Docker and ECS. Automate infrastructure provisioning and configuration using Terraform and Ansible. Develop scripts and tools in Python and Shell to automate operational tasks. Implement and maintain CI/CD pipelines using Jenkins, GitHub Actions, and Git. Manage and troubleshoot Linux systems (RHEL, Ubuntu, Amazon Linux) and Windows environments. Work with Active Directory (AD) for user and access management, integrating with cloud infrastructure. Monitor system performance, availability, and security using AWS native tools and best practices. Collaborate with cross-functional teams to support development, testing, and production environments.

Job Category: Digital_Cloud_Web Technologies
Job Type: Full Time
Job Location: Noida
Experience: 3 - 5 years
Notice period: 0-30 days
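In the spirit of the cost-optimization responsibility above, a short boto3 sweep that reports unattached EBS volumes still accruing charges; the report format is incidental.

```python
# Report EBS volumes in the 'available' state (i.e., attached to nothing).
import boto3

ec2 = boto3.client("ec2")
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}])["Volumes"]

for v in volumes:
    print(v["VolumeId"], v["Size"], "GiB", v["AvailabilityZone"])

total_gib = sum(v["Size"] for v in volumes)
print(f"{len(volumes)} unattached volumes, {total_gib} GiB total")
```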
Posted 1 day ago
10.0 years
8 - 10 Lacs
Lucknow
On-site
Job Title: Linux System Engineer (Tomcat/Apache/Patch Management)
Location: Lucknow
Work Mode: Onsite, work from office; government project (CMMI Level 3 company)
Experience: 10+ years

Key Responsibilities: Administer, monitor, and troubleshoot Linux servers (RHEL/CentOS/Ubuntu) in production and staging environments. Configure, deploy, and manage Apache HTTP Server and Apache Tomcat applications. Perform regular patching, upgrades, and vulnerability remediation across Linux systems to maintain security compliance. Ensure availability, reliability, and performance of all server components. Maintain server hardening and compliance based on organization and industry standards. Automate routine tasks using shell scripting (Bash; Python preferred). Monitor system health using tools like Nagios, Zabbix, or similar. Collaborate with DevOps and Development teams for deployment and release planning. Support CI/CD pipelines and infrastructure provisioning (exposure to Jenkins, Ansible, Docker, Git, etc.). Document system configurations, procedures, and policies.

Required Skills & Qualifications: 8-10 years of hands-on experience in Linux systems administration. Strong expertise in Apache and Tomcat setup, tuning, and management. Experience with patch management tools (e.g., YUM, APT, Satellite, WSUS). Proficient in shell scripting (Bash; Python preferred). Familiarity with DevOps tools like Jenkins, Ansible, Git, Docker, etc. Experience in infrastructure monitoring and alerting tools. Strong troubleshooting and problem-solving skills. Understanding of basic networking and firewalls. Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred: Exposure to cloud platforms (AWS, Azure, GCP). Certification in Red Hat (RHCE/RHCSA) or Linux Foundation. Experience with infrastructure as code (Terraform, CloudFormation) good to have.

Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,000,000.00 per year
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9509902875
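A small sketch of routine automation for a role like this: verify that Apache and Tomcat answer locally and list pending security updates on a RHEL-style host. Ports are the common defaults, and the yum subcommand varies by distribution.

```python
# Local health check plus a pending-security-updates report.
import subprocess
import urllib.request

def url_ok(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

print("apache:", url_ok("http://localhost:80/"))   # default Apache port
print("tomcat:", url_ok("http://localhost:8080/")) # default Tomcat port

# Standard RHEL/CentOS security-update listing; output format varies.
result = subprocess.run(["yum", "updateinfo", "list", "security"],
                        capture_output=True, text=True)
print(result.stdout)
```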
Posted 1 day ago
3.0 years
0 Lacs
India
Remote
Ready to be pushed beyond what you think you’re capable of?
At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform — and with it, the future global financial system.
To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.
While many roles at Coinbase are remote-first, we are not remote-only. In-person participation is required throughout the year. Team and company-wide offsites are held multiple times annually to foster collaboration, connection, and alignment. Attendance is expected and fully supported.
Team
Coinbase is seeking a software engineer to join our India pod to drive the launch and growth of Coinbase in India. You will solve unique, large-scale, highly complex technical problems. You will help build the next generation of systems to make cryptocurrency accessible to everyone across multiple platforms (web, iOS, Android), operating real-time applications with high frequency and low latency updates, keeping the platform safe from fraud, enabling delightful experiences, and managing the most secure, containerized infrastructure running in the cloud.
What you’ll be doing (i.e., job duties):
Build high-performance services using Golang and gRPC, creating seamless integrations that elevate Coinbase's customer experience.
Adopt, learn, and drive best practices in design techniques, coding, testing, documentation, monitoring, and alerting.
Demonstrate a keen awareness of Coinbase’s platform, development practices, and various technical domains, and build upon them to efficiently deliver improvements across multiple teams.
Add positive energy in every meeting and make your coworkers feel included in every interaction.
Communicate across the company to both technical and non-technical leaders with ease.
Deliver top-quality services in a tight timeframe by navigating seamlessly through uncertainties.
Work with teams and teammates across multiple time zones.
What we look for in you (i.e., job requirements):
3+ years of experience as a software engineer and 1+ years building backend services using Golang and gRPC.
A self-starter capable of executing complex solutions with minimal guidance while ensuring efficiency and scalability.
Proven experience integrating at least two third-party applications using Golang.
Hands-on experience with AWS, Kubernetes, Terraform, Buildkite, or similar cloud infrastructure tools.
Working knowledge of event-driven architectures (Kafka, MQ, etc.) and hands-on experience with SQL or NoSQL databases.
Good understanding of gRPC, GraphQL, ETL pipelines, and modern development practices.
Nice to haves:
SaaS platform experience (Salesforce, Amazon Connect, Sprinklr).
Experience with AWS, Kubernetes, Terraform, GitHub Actions, or similar tools.
Familiarity with rate limiters, caching, metrics, logging, and debugging.
Req ID: GCBE04IN
Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying.
Commitment to Equal Opportunity
Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law.
Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information.
Global Data Privacy Notice for Job Candidates and Applicants
Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
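The posting's backend stack is Golang and gRPC; as a language-neutral illustration of the event-driven side it also asks for (Kafka), here is a minimal consumer sketch in Python using the kafka-python library. The topic name, broker address, and group ID are hypothetical.

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "payment-events",                      # hypothetical topic name
    bootstrap_servers=["localhost:9092"],  # assumed broker address
    group_id="backend-service",            # hypothetical consumer group
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Runs until interrupted; idempotent handling keeps reprocessing safe after restarts.
for message in consumer:
    event = message.value
    print(f"partition={message.partition} offset={message.offset} type={event.get('type')}")
```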
Posted 1 day ago
5.0 years
8 - 9 Lacs
Calcutta
On-site
Description
Summary:
We are looking for a DevOps Engineer to join a globally distributed development department following an Agile software development and release methodology. This position works with many technologies such as Azure Cloud Services, Windows administration, Azure networking, Azure Firewall, microservice infrastructure, and Docker. The ideal candidate will be an energetic learner and enjoy sharing knowledge within the team via training sessions or documentation creation (preferably well versed in .md and .yml files).
Role:
Design, develop, maintain, and support high-quality in-house software build systems for enterprise-class software.
Participate in SRE practice working sessions and adopt and implement best practices in the respective fields.
Develop and maintain IaC through Terraform, PowerShell, and Linux shell scripting.
Define the networking and firewall rules for achieving the business goals.
Define strategy for source code control through GitHub and build-and-deploy pipelines through GitHub Actions. Understanding the GitHub auth model would be a plus.
Work with containerization (e.g., Docker, AKS).
Work with Azure PaaS (e.g., Azure App Service, Azure Blob Storage, Cosmos DB, Azure Functions, etc.).
Ensure systems can accommodate growth in our delivery needs by understanding project requirements during the SDLC process, and monitor applications for high availability.
Define monitoring and alerting best practices based on Site Reliability Engineering.
Work with Azure Log Analytics and App Insights through KQL queries.
Analyze application and server logs for troubleshooting C#-based applications.
Apply the RBAC model of Azure services.
Manage security certificates/keystores, and track and update certificates based on the established process.
Be available via email, telephone, or any device that may be assigned, in order to be part of a pager duty rotation that might extend over weekends as well.
Qualifications
Requirements:
BE, BTech, or MCA as educational qualification.
5+ years' experience in DevOps/SRE concepts.
Experience in an Agile software development process.
Good hands-on expertise in Terraform, PowerShell, and Linux shell scripting.
Hands-on experience with GitHub and GitHub Actions for building different pipelines. Understanding the GitHub auth model would be a plus.
Experience with containerization (e.g., Docker, AKS).
Well versed in the RBAC model of Azure services.
Proficient in Azure Log Analytics and App Insights handling through KQL queries.
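As an illustration of the KQL-driven log analysis this role emphasizes, here is a hedged Python sketch using the azure-monitor-query SDK to run a query against a Log Analytics workspace. The workspace ID is a placeholder, and the AppExceptions query assumes a workspace-based Application Insights resource.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential   # pip install azure-identity
from azure.monitor.query import LogsQueryClient     # pip install azure-monitor-query

client = LogsQueryClient(DefaultAzureCredential())

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder, not a real workspace
# Count recent exceptions per operation; AppExceptions exists in workspace-based App Insights.
KQL = """
AppExceptions
| where TimeGenerated > ago(1h)
| summarize count() by OperationName
| order by count_ desc
"""

response = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(hours=1))
for table in response.tables:
    for row in table.rows:
        print(row)
```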
Posted 1 day ago
5.0 years
0 Lacs
Dharmapuri, Tamil Nadu, India
Remote
As a Senior Help Desk Technician at Lightcast, you will be a critical part of our IT support team, providing technical assistance and support to employees. This career-level role is designed for an experienced IT professional with a deep understanding of IT systems, excellent problem-solving skills, and a passion for delivering exceptional customer service. You will lead technical initiatives and mentor junior team members.
Major Responsibilities:
Technical Support: Resolve complex hardware, software, and system issues for end-users across platforms (Windows, macOS, Linux).
Incident Management: Lead incident response, ensuring timely resolution and escalation when necessary.
Knowledge Base: Contribute to and maintain documentation of known issues, best practices, and troubleshooting guides.
Problem Ownership: Take initiative in resolving challenging technical problems, collaborating across IT teams as needed.
Documentation: Accurately record all support interactions and resolutions in the helpdesk system.
Security Compliance: Enforce and support company-wide IT security policies and compliance standards.
Procurement & Licensing: Manage purchases of hardware/software, license renewals, and subscription tracking.
Asset Management: Oversee inventory and lifecycle management of all IT assets.
Skills/Abilities:
5+ years of hands-on IT support experience with a focus on troubleshooting and issue resolution.
Strong knowledge of Windows OS and Microsoft Office; familiarity with macOS and Linux environments.
Proven problem-solving abilities with strong attention to detail.
Excellent communication and interpersonal skills.
Experience with asset management and support tools (e.g., ticketing systems, remote support tools).
Familiarity with cloud environments (AWS preferred) and infrastructure-as-code tools (e.g., Terraform, Pulumi).
Knowledge of ITIL, ISO 27001, and accessibility standards (e.g., WCAG) is a plus.
Proficiency in automation scripting (e.g., Python, PowerShell, JavaScript) is highly desirable.
Education and Experience:
Bachelor's degree in IT, Computer Science, or a related field.
IT certifications (e.g., CompTIA A+, Network+, Microsoft Certified) strongly preferred.
5+ years of experience in IT support, with a strong background in troubleshooting hardware and software issues.
Lightcast is a global leader in labor market insights with headquarters in Moscow (ID) and Boston (MA) and offices in the United Kingdom, Europe, and India. We work with partners across six continents to help drive economic prosperity and mobility by providing the insights needed to build and develop our people, our institutions and companies, and our communities.
Lightcast is proud to be an equal opportunity workplace and is committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Lightcast has always been, and always will be, committed to diversity, equity and inclusion. We seek dynamic professionals from all backgrounds to join our teams, and we encourage our employees to bring their authentic, original, and best selves to work.
Posted 1 day ago
1.0 years
0 - 0 Lacs
Indore
On-site
Responsibilities:
Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure.
Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline.
Manage and administer Linux systems, ensuring reliability and security.
Configure and provision cloud resources on AWS, Google Cloud, or Azure as required.
Implement and maintain containerized environments using Docker and orchestration with Kubernetes.
Monitor system performance and troubleshoot issues to ensure optimal application uptime.
Stay updated with industry best practices, tools, and DevOps methodologies.
Enhance software development processes through automation and continuous improvement initiatives.
Requirements:
Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA.
Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only.
Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation.
Proficiency in software development using languages such as Python, Bash, and Go.
Experience in continuous integration with tools such as Jenkins, Travis CI, and CircleCI.
Strong Linux system administration skills.
Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure).
Excellent verbal and written communication skills.
Experience with containerization and orchestration tools such as Docker and Kubernetes.
Job Type: Full-time
Pay: ₹45,509.47 - ₹85,958.92 per month
Benefits: Health insurance
Schedule: Day shift
Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred); AI/ML: 1 year (Preferred)
Location: Indore, Madhya Pradesh (Preferred)
Work Location: In person
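For a concrete feel of the IaC automation this role describes, here is a minimal Python wrapper around the standard Terraform workflow (init, plan, apply). The "infra" working directory is an assumed layout; in practice a CI system such as Jenkins or CircleCI would drive these same commands.

```python
import subprocess
import sys

def terraform(*args, workdir="infra"):
    """Run a terraform subcommand in the given directory, exiting on failure."""
    cmd = ["terraform", f"-chdir={workdir}", *args]  # -chdir requires Terraform 0.14+
    result = subprocess.run(cmd, text=True)
    if result.returncode != 0:
        sys.exit(result.returncode)

# Non-interactive workflow: initialize, write a saved plan, then apply exactly that plan.
terraform("init", "-input=false")
terraform("plan", "-input=false", "-out=tfplan")
terraform("apply", "-input=false", "tfplan")
```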
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!
The Opportunity
“A DevOps role at FICO is an opportunity to work with cutting-edge cloud technologies with a team focused on delivery of secure cloud solutions and products to enterprise customers.” - VP, DevOps Engineering
What You’ll Contribute
Design, implement, and maintain Kubernetes clusters in AWS environments.
Develop and manage CI/CD pipelines using Tekton, Argo CD, Flux, or similar tools.
Implement and maintain observability solutions (monitoring, logging, tracing) for Kubernetes-based applications.
Collaborate with development teams to optimize application deployments and performance on Kubernetes.
Automate infrastructure provisioning and configuration management using AWS services and tools.
Ensure security and compliance in the cloud infrastructure.
What We’re Seeking
Proficiency in Kubernetes administration and deployment, particularly in AWS (EKS).
Experience with AWS services such as EC2, S3, IAM, ACM, Route 53, ECR.
Experience with Tekton for building CI/CD pipelines.
Strong understanding of observability tools like Prometheus, Grafana, or similar.
Scripting and automation skills (e.g., Bash, GitHub workflows).
Knowledge of cloud platforms and container orchestration.
Experience with infrastructure-as-code tools (Terraform, CloudFormation).
Knowledge of Helm.
Understanding of security best practices in cloud and Kubernetes environments.
Proven experience in delivering microservices and Kubernetes-based systems.
Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
Why Make a Move to FICO?
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today – Big Data analytics. You’ll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more.
FICO makes a real difference in the way businesses operate worldwide:
Credit Scoring — FICO® Scores are used by 90 of the top 100 US lenders.
Fraud Detection and Security — 4 billion payment cards globally are protected by FICO fraud systems.
Lending — 3/4 of US mortgages are approved using the FICO Score.
Global trends toward digital transformation have created tremendous demand for FICO’s solutions, placing us among the world’s top 100 software companies by revenue. We help many of the world’s largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people – just like you – who thrive on the collaboration and innovation that’s nurtured by a diverse and inclusive environment. We’ll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks!
Learn more about how you can fulfill your potential at www.fico.com/Careers
FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we’re proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don’t meet all stated qualifications. While our qualifications are clearly related to role success, each candidate’s profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply.
Information submitted with your application is subject to the FICO Privacy policy at https://www.fico.com/en/privacy-policy
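As a small example of the Kubernetes observability work described above, this Python sketch uses the official Kubernetes client to report pods that are not in a healthy phase. It assumes a local kubeconfig with read access to the cluster; an in-cluster deployment would call config.load_incluster_config() instead.

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # assumes a local kubeconfig with cluster access
v1 = client.CoreV1Api()

# Flag any pod whose phase is neither Running nor Succeeded (e.g., Pending, Failed).
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```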
Posted 1 day ago
14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DevOps Manager
Location: Ahmedabad/Hyderabad
Experience Required: 14+ years total experience, with 4–5 years in managerial roles.
Technical Knowledge and Skills:
Mandatory:
Cloud: GCP (complete stack from IAM to GKE)
CI/CD: End-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD)
IaC: Terraform, Helm
Containers: Docker, Kubernetes
DevSecOps: Vault, Trivy, OWASP
Nice to Have:
FinOps exposure for cost optimization
Big Data tools familiarity (BigQuery, Dataflow)
Familiarity with Kong, Anthos, Istio
Scope:
Lead DevOps team across multiple pods and products
Define roadmap for automation, security, and CI/CD
Ensure operational stability of deployment pipelines
Roles and Responsibilities:
Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices.
Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations.
Champion DevSecOps practices by embedding security controls throughout the pipeline, ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement.
Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies.
Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices.
Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes.
Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards.
Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits.
Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships.
Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions.
Foster a culture of continuous learning, innovation, and ownership by driving internal tech talks, hackathons, and community engagement.
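To make the DevOps-metrics responsibility concrete, here is a toy Python sketch computing lead time and change failure rate from deployment records. The records are invented for illustration; a real dashboard would pull them from CI/CD and incident tooling.

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: (merged_at, deployed_at, failed_in_production)
deployments = [
    (datetime(2024, 5, 1, 9), datetime(2024, 5, 1, 15), False),
    (datetime(2024, 5, 2, 10), datetime(2024, 5, 3, 11), True),
    (datetime(2024, 5, 4, 8), datetime(2024, 5, 4, 12), False),
]

# Lead time for changes: hours from merge to production deployment.
lead_times = [(deployed - merged).total_seconds() / 3600 for merged, deployed, _ in deployments]
# Change failure rate: fraction of deployments that caused a production failure.
failure_rate = sum(failed for *_, failed in deployments) / len(deployments)

print(f"Average lead time: {mean(lead_times):.1f} h")
print(f"Change failure rate: {failure_rate:.0%}")
```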
Posted 1 day ago
4.0 - 8.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Job Summary:
We are seeking a highly skilled MuleSoft Developer with 5-7 years of hands-on experience in designing, developing, and managing APIs and integrations using the MuleSoft Anypoint Platform. The ideal candidate should have a strong background in enterprise integration patterns, API-led connectivity, and cloud-native architecture. You will work closely with cross-functional teams to deliver scalable integration solutions that meet the organization's strategic goals and technical standards.
Key Responsibilities:
Design, build, and maintain APIs using the MuleSoft Anypoint Platform.
Develop and deploy MuleSoft applications in on-prem, cloud, or hybrid environments.
Create RAML-based API specifications, perform data transformations using DataWeave, and configure third-party system connectors.
Collaborate with business analysts, architects, and QA teams to define and implement integration solutions.
Develop reusable components and frameworks to support scalable integration architecture.
Participate in code reviews, unit testing, and CI/CD pipeline integration.
Implement error handling, logging, and monitoring.
Troubleshoot and optimize deployed MuleSoft applications.
Maintain documentation and follow best practices and security policies.
Required Skills & Qualifications:
5-7 years of total development experience, with at least 3+ years of hands-on MuleSoft experience.
Proficiency with Mule 4.x (and knowledge of Mule 3.x).
Experience in designing and implementing REST/SOAP APIs using RAML, JSON, and XML.
Strong knowledge of DataWeave, MUnit, Maven, and Anypoint Studio.
Familiarity with OAuth 2.0, JWT, and API security best practices including threat protection, rate limiting, and encryption.
Experience with Git, Jenkins, and CI/CD processes.
MuleSoft Certification (Developer Level 1 or higher) is highly desirable.
Excellent problem-solving abilities, attention to detail, and commitment to quality.
Nice to Have:
Experience with Salesforce, SAP, Workday, or other enterprise systems.
Familiarity with Kafka, RabbitMQ, or similar messaging systems.
Exposure to cloud platforms like AWS, Azure, or GCP.
Understanding of DevOps practices and infrastructure-as-code tools like Terraform.
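As an illustration of the JWT-based API security this posting mentions, here is a hedged Python sketch using the PyJWT library to verify a bearer token. The shared secret and HS256 algorithm are illustrative choices; MuleSoft deployments typically enforce this kind of check through API Manager policies rather than application code.

```python
import jwt  # pip install PyJWT

SECRET = "replace-with-a-real-key"  # hypothetical shared secret, never hard-coded in practice

def validate_token(token: str) -> dict:
    """Decode and verify an HS256-signed bearer token, raising on any failure."""
    # require "exp" so tokens without an expiry are rejected outright
    return jwt.decode(token, SECRET, algorithms=["HS256"], options={"require": ["exp"]})

# Usage: claims = validate_token(request_header.removeprefix("Bearer "))
# Raises jwt.ExpiredSignatureError / jwt.InvalidTokenError for bad tokens.
```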
Posted 1 day ago
13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Release Manager – Tools & Infrastructure
Location: Hyderabad
Experience Level: 13+ years
Department: Engineering / DevOps
Reporting To: Head of DevOps / Engineering Director
About the Role:
We are seeking a hands-on Release Manager with strong DevOps and infrastructure knowledge to oversee software release pipelines, tooling, and automation processes across distributed systems. The ideal candidate will be responsible for managing releases, ensuring environment readiness, coordinating with engineering, SRE, and QA teams, and driving tooling upgrades and ecosystem health. This is a critical role that bridges the gap between development and operations, ensuring timely, stable, and secure delivery of applications across environments.
Key Responsibilities:
Release & Environment Management:
Manage release schedules, timelines, and coordination with multiple delivery streams.
Own the setup and consistency of lower environments and production cutover readiness.
Ensure effective version control, build validation, and artifact management across CI/CD pipelines.
Oversee rollback strategies, patch releases, and post-deployment validations.
Toolchain Ownership:
Manage and maintain DevOps tools such as Jenkins, GitHub Actions, Bitbucket, SonarQube, JFrog, Argo CD, and Terraform.
Govern container orchestration through Kubernetes and Helm.
Maintain secrets and credential hygiene through HashiCorp Vault and related tools.
Infrastructure & Automation:
Work closely with Cloud, DevOps, and SRE teams to ensure automated and secure deployments.
Leverage GCP (VPC, Compute Engine, GKE, Load Balancer, IAM, VPN, GCS) for scalable infrastructure.
Ensure adherence to infrastructure-as-code (IaC) standards using Terraform and Helm charts.
Monitoring, Logging & Stability:
Implement and manage observability tools such as Prometheus, Grafana, ELK, and Datadog.
Monitor release impact, track service health post-deployment, and lead incident response if required.
Drive continuous improvement for faster and safer releases by implementing lessons from RCAs.
Compliance, Documentation & Coordination:
Use Jira, Confluence, and ServiceNow for release planning, documentation, and service tickets.
Implement basic security standards (OWASP, WAF, GCP Cloud Armor) in release practices.
Conduct cross-team coordination with QA, Dev, CloudOps, and Security for aligned delivery.
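To illustrate the post-deployment validation that feeds a rollback decision, here is a hedged Python sketch querying a Prometheus HTTP API for the recent 5xx error rate. The endpoint, metric name, and threshold are assumptions for illustration.

```python
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed Prometheus endpoint
# Hypothetical metric: requests/sec with 5xx status over the last 5 minutes.
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=10)
resp.raise_for_status()
result = resp.json()["data"]["result"]

# An empty result means no matching series; treat that as zero errors.
error_rate = float(result[0]["value"][1]) if result else 0.0
THRESHOLD = 1.0  # illustrative errors/sec budget
print("rollback recommended" if error_rate > THRESHOLD else "release healthy")
```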
Posted 1 day ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
India's major tech hubs are known for their strong tech presence and have a high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
Interview preparation for these roles typically covers core Terraform workflow topics, such as explaining the plan and apply commands.
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!