
215 CloudFormation Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Solution Architect in the Pre-Sales department, with 4-6 years of experience in cloud infrastructure deployment, migration, and managed services, your primary responsibility will be to design AWS Cloud Professional Services and AWS Cloud Managed Services solutions tailored to customer needs and requirements. You will engage with customers to analyze their requirements, ensuring cost-effective and technically sound solutions are provided. Your role will also involve developing technical and commercial proposals in response to client inquiries such as Requests for Information (RFI), Requests for Quotation (RFQ), and Requests for Proposal (RFP). Additionally, you will prepare and deliver technical presentations to clients, highlighting the value and capabilities of AWS solutions. Collaborating closely with the sales team, you will support their objectives and close deals that align with business needs.

Your ability to apply creative and analytical problem-solving to complex challenges using AWS technology will be crucial. You should have hands-on experience planning, designing, and implementing AWS IaaS, PaaS, and SaaS services; executing end-to-end cloud migrations to AWS, including discovery, assessment, and implementation; and designing and deploying well-architected landing zones and disaster recovery environments on AWS. Excellent written and verbal communication skills are essential for articulating solutions to technical and non-technical stakeholders, and your organizational, time management, problem-solving, and analytical skills will play a vital role in driving consistent business performance and exceeding targets.

Desirable skills include intermediate-level experience with AWS services such as AppStream, Elastic Beanstalk, ECS, ElastiCache, and more, as well as IT orchestration and automation tools such as Ansible, Puppet, and Chef. Knowledge of Terraform, Azure DevOps, and AWS development services will be advantageous. In this role, based in Noida, Uttar Pradesh, India, you will collaborate with technical and non-technical teams across the organization to deliver scalable, efficient, and secure solutions on the AWS platform.
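Since listings like this one center on CloudFormation templates and well-architected landing zones, a minimal template sketch may help calibrate the skill being asked for. The resource name and parameter below are hypothetical; this builds a one-bucket template as a Python dict and renders it to JSON, one of the two formats (JSON/YAML) CloudFormation accepts.

```python
import json

# A minimal CloudFormation template sketched as a Python dict: one S3 bucket
# with versioning and encryption, the sort of building block a landing-zone
# design might standardize. All names here are illustrative.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Illustrative bucket template (hypothetical names)",
    "Parameters": {
        "BucketSuffix": {"Type": "String", "Description": "Unique bucket suffix"}
    },
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                # Fn::Sub interpolates the parameter into the bucket name
                "BucketName": {"Fn::Sub": "artifacts-${BucketSuffix}"},
                "VersioningConfiguration": {"Status": "Enabled"},
                "BucketEncryption": {
                    "ServerSideEncryptionConfiguration": [
                        {"ServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
                    ]
                },
            },
        }
    },
    "Outputs": {"BucketName": {"Value": {"Ref": "ArtifactBucket"}}},
}

rendered = json.dumps(template, indent=2)
```

In practice such a template would be deployed with `aws cloudformation deploy` or via a pipeline; generating it programmatically, as here, is one common way teams keep templates DRY.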

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are seeking a Full Stack Developer with at least 5 years of experience for our team; we are currently only considering applicants based in India.

About the Role

As a Full Stack Developer, you will be responsible for developing reliable and high-performance applications that support our AI-driven construction platform. Your expertise in Java Spring Boot, microservices architecture, PDF processing, and AWS DevOps will be essential for this role.

Key Responsibilities

- Backend Development: Design and implement scalable microservices using Java Spring Boot for optimized performance and maintainability.
- PDF Document Processing: Build modules for extracting, processing, and managing PDF documents related to construction plans, contracts, and specifications using Python.
- Front-End Integration: Collaborate with frontend engineers to ensure seamless communication with backend services through REST APIs or GraphQL.
- Cloud Architecture & Deployment: Deploy and manage services on AWS using DevOps best practices, including containerization (Docker), orchestration (Kubernetes/ECS), and CI/CD pipelines (GitHub Actions, CodePipeline).
- Database & Data Flow: Design data models using PostgreSQL and MongoDB, and manage data pipelines and integrations across services.
- Security & Scalability: Implement access controls, encryption standards, and secure API endpoints to support enterprise-level deployments.
- Cross-Team Collaboration: Work with AI/ML engineers, product managers, and domain experts to develop backend services that support AI features like document understanding and risk analysis.

Required Skills & Qualifications

Technical Skills
- Strong programming skills in Java, with hands-on experience in Spring Boot and microservices architecture.
- Experience in processing and managing data from PDFs using tools like Apache PDFBox, iText, or similar libraries.
- Proficiency in designing and consuming RESTful APIs or GraphQL APIs.
- Experience with AWS services like EC2, S3, Lambda, API Gateway, CloudWatch, and RDS.
- Hands-on experience with Docker, CI/CD pipelines, and infrastructure automation (e.g., Terraform, CloudFormation).
- Familiarity with PostgreSQL, MongoDB, and distributed caching mechanisms (e.g., Redis).
- Understanding of authentication and security principles (OAuth2, JWT, etc.).
- Exposure to AI/ML model consumption via APIs (e.g., OpenAI, SageMaker).

Soft Skills
- Ability to work independently and take full ownership of backend services.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration in agile cross-functional teams.
- Passion for delivering high-quality, reliable, and scalable solutions.

Beyond Technical Skills - What We're Looking For

We are seeking individuals who thrive in a fast-paced, high-ownership environment and value the following:

Startup Readiness & Ownership
- Bias for action: Ability to ship fast, test quickly, and iterate purposefully.
- Comfort with ambiguity: Make decisions with limited information and adapt as needed.
- Ownership mindset: Treat the product as your own, not just a list of tasks to complete.
- Resourcefulness: Know when to hack solutions and when to build them properly.

Product Thinking
- User-Centric Approach: Care about the user's perspective and the purpose behind what you're building.
- Collaborative in Shaping Product: Comfortable challenging and refining product specifications.
- Strategic Trade-off Awareness: Navigate choices such as speed vs scalability, UX vs tech debt, MVP vs V1 with clarity.

Collaboration & Communication
- Cross-Functional Comfort: Work effectively with product, design, and founders.
- Clear communicator: Explain technical concepts in simple terms when necessary.
- Feedback culture fit: Give and receive feedback constructively.

Growth Potential
- Fast Learner: Willingness to adapt to changing environments and technologies.
- Long-Term Mindset: Opportunities for growth and development.
- Mentorship Readiness: Support team growth as the team expands.

Startup Cultural Fit
- Mission-Driven: Care deeply about the impact of your work.
- Flexible Work Style: Embrace remote-friendly and hybrid working environments.
- No big-company baggage: Thrive in a fast-paced, collaborative environment.

Why Join Our Startup
- Shape the future of construction technology through intelligent automation and smart workflows.
- Ownership & Impact: See the direct results of your work in a high-impact startup setting.
- Competitive Package: Receive a market-aligned salary and performance-based incentives.
- Remote Flexibility: Enjoy a hybrid/remote-friendly work culture.
- Work with Experts: Collaborate with leaders in AI, construction, and cloud-native development.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Jodhpur, Rajasthan

On-site

As a Full-time Backend Developer, you should possess over 5 years of experience working with Python and have hands-on experience with frameworks such as Flask, Django, or FastAPI. Your proficiency in AWS services, including Lambda, S3, SQS, and CloudFormation, is crucial. Additionally, experience with relational databases like PostgreSQL or MySQL is required. Familiarity with testing frameworks like Pytest or NoseTest, expertise in REST API development and JWT authentication, and proficiency in version control tools such as Git are essential for this role.

For the Frontend Developer position, you should have more than 3 years of experience with ReactJS and a strong understanding of its core principles. Experience with state management tools like Redux Thunk, Redux Saga, or Context API is necessary, along with familiarity with RESTful APIs and modern front-end build pipelines and tools. Proficiency in HTML5, CSS3, and pre-processing platforms like SASS/LESS is required. You should also have experience with modern authorization mechanisms like JSON Web Tokens (JWT) and be familiar with front-end testing libraries such as Cypress, Jest, or React Testing Library. Experience developing shared component libraries is a plus for this role.
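Both roles above call out JWT authentication. As a rough illustration of what an HS256 token involves under the hood (in practice you would use a maintained library such as PyJWT rather than hand-rolling this), here is a standard-library sketch of signing and verifying:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Produce a compact HS256 JWT: base64url(header).base64url(payload).base64url(sig)."""
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = f"{_b64url(json.dumps(header).encode())}.{_b64url(json.dumps(payload).encode())}"
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def verify_jwt(token: str, secret: str) -> dict:
    """Check the HMAC signature and return the decoded claims, or raise ValueError."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign_jwt({"sub": "user-42", "iat": int(time.time())}, "demo-secret")
claims = verify_jwt(token, "demo-secret")
```

A real implementation would also validate registered claims such as `exp` and `aud`; the point here is only that the token is a signed, not encrypted, payload.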

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Release Engineer - Intermediate at Gentrack, you will play a crucial role in bridging the gap between development and platform operations. Your primary responsibility will be to enable continuous integration, deployment, and automation across our development and production environments. This includes working on development and configuration-related tasks, enhancing processes, automating activities, and ensuring the scalability and reliability of our systems. You will work closely with software developers, platform teams, testers, and other IT teams to manage code releases effectively. Your focus will be on implementing and managing CI/CD pipelines to streamline the release process. Additionally, you will be involved in development tasks and collaborate with various teams to drive successful releases.

We are looking for a candidate with 4 to 6 years of experience, including at least 2 years of hands-on experience in Release Engineering, DevOps, system administration, or software development. Proficiency in cloud platforms, especially AWS (EC2, S3), is essential. You should have a strong understanding of version control systems like Git and branching strategies. Experience with CI/CD tools such as Jenkins is also required. It would be beneficial if you have experience with automation tools like Ansible, Terraform, or CloudFormation, as well as containerization technologies like Docker and Kubernetes. However, we are open to coaching and collaborating with individuals who are eager to learn and grow in these areas.

At Gentrack, we value Respect for the Planet and encourage our team members to promote sustainability initiatives aligned with our Sustainability Charter. By actively engaging in our global sustainability programs, you will contribute to creating a positive impact on society and the environment. Joining Gentrack means being part of a dynamic and supportive team that fosters personal growth, professional development, and technical excellence.

You will have the opportunity to work in a global, high-growth organization with a clear career path. Our vibrant culture is driven by passionate individuals dedicated to transformation and positive change. We offer a competitive reward package to recognize and reward top talent, giving you a chance to make a meaningful difference in society and the world. If you are enthusiastic about learning, eager to contribute to our mission, and committed to continuous improvement, we welcome you to join our team at Gentrack and be part of our journey towards innovation and sustainability.
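Release engineering of the kind described often includes small automation helpers invoked from the CI/CD pipeline. As an illustrative (not Gentrack-specific) example, a Python helper that computes the next semantic-version release tag:

```python
import re

# Matches tags like "1.2.3" or "v1.2.3"
SEMVER = re.compile(r"^v?(\d+)\.(\d+)\.(\d+)$")

def bump(tag: str, part: str = "patch") -> str:
    """Return the next release tag; `part` is 'major', 'minor', or 'patch'."""
    m = SEMVER.match(tag)
    if not m:
        raise ValueError(f"not a semver tag: {tag}")
    major, minor, patch = (int(g) for g in m.groups())
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    else:
        patch += 1
    prefix = "v" if tag.startswith("v") else ""  # preserve the tag's prefix style
    return f"{prefix}{major}.{minor}.{patch}"
```

A Jenkins or GitHub Actions job might call this against `git describe --tags` output to decide the next tag; the exact branching strategy would dictate which part to bump.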

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You are an experienced DevOps Engineer with over 8 years of experience, looking to join a team that values building and maintaining scalable AWS infrastructure, automating deployments, and ensuring high system reliability.

Your key responsibilities will include designing and implementing scalable and reliable AWS infrastructure, developing and maintaining CI/CD pipelines using Jenkins or GitHub Actions, building and operating Kubernetes clusters (including Helm charts), using Terraform or CloudFormation for Infrastructure as Code, automating system tasks with Python or Go, collaborating with developers and SREs to ensure resilient and efficient systems, and monitoring and troubleshooting performance using Datadog, Prometheus, and Grafana.

To excel in this role, you should have production-level hands-on expertise in Docker and Kubernetes, strong knowledge of CI/CD tools such as Jenkins or GitHub Actions, proficiency in Terraform or CloudFormation, solid scripting skills in Python or Go, experience with observability tools like Datadog, Prometheus, and Grafana, strong problem-solving and collaboration skills, and a Bachelor's degree in Computer Science, IT, or a related field. Certifications in Kubernetes or Terraform would be a plus.

If you are ready to take on this exciting opportunity, apply now by sending your resume to preeti.verma@qplusstaffing.com.
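"Automating system tasks using Python," as roles like this describe, often starts with patterns such as retrying transient failures with exponential backoff. A generic sketch, with names and delays chosen purely for illustration:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def with_retries(op: Callable[[], T], attempts: int = 4, base_delay: float = 0.01) -> T:
    """Run `op`, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, 0.04s, ...
    raise RuntimeError("unreachable")

# Usage: a hypothetical flaky call that succeeds on the third try
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = with_retries(flaky)
```

Production automation would typically retry only specific exception types and add jitter, but the backoff skeleton is the same.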

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Haryana

On-site

As a Digital Product Engineering company, Nagarro is seeking a talented individual to join our dynamic and non-hierarchical work culture as a Data Engineer. With over 17,500 experts across 39 countries, we are scaling in a big way and are looking for someone with 10+ years of total experience to contribute to our team.

**Requirements:**

- Strong working experience in Data Engineering and Big Data platforms.
- Hands-on experience with Python and PySpark.
- Expertise with AWS Glue, including Crawlers and the Data Catalog.
- Experience with Snowflake and a strong understanding of AWS services such as S3, Lambda, Athena, SNS, and Secrets Manager.
- Familiarity with Infrastructure-as-Code (IaC) tools like CloudFormation and Terraform.
- Strong experience with CI/CD pipelines, preferably using GitHub Actions.
- Working knowledge of Agile methodologies, JIRA, and GitHub version control.
- Exposure to data quality frameworks, observability, and data governance tools and practices.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

**Responsibilities:**

- Writing and reviewing high-quality code to meet technical requirements.
- Understanding clients' business use cases and converting them into technical designs.
- Identifying and evaluating different solutions to meet clients' requirements.
- Defining guidelines and benchmarks for Non-Functional Requirements (NFRs) during project implementation.
- Developing design documents explaining the architecture, framework, and high-level design of applications.
- Reviewing architecture and design aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs.
- Designing overall solutions for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks.
- Relating technology integration scenarios and applying learnings in projects.
- Resolving issues raised during code reviews through systematic root-cause analysis.
- Conducting Proofs of Concept (POCs) to ensure suggested designs and technologies meet requirements.

**Qualifications:**

- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

If you are passionate about Data Engineering, experienced with Big Data platforms, proficient in Python and PySpark, and have a strong understanding of AWS services and Infrastructure-as-Code tools, we invite you to join Nagarro and be part of our innovative team.
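The listing mentions data quality frameworks alongside Glue and PySpark. As a plain-Python stand-in for the kind of batch check a Glue job might run at scale (the thresholds and column names here are hypothetical), a sketch:

```python
def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for r in rows if r.get(column) is None)
    return missing / len(rows)

def check_quality(rows, max_null_rate=0.05, required=("id",)):
    """Return a list of human-readable violations (empty means the batch passes)."""
    problems = []
    for col in required:
        rate = null_rate(rows, col)
        if rate > max_null_rate:
            problems.append(f"{col}: {rate:.0%} null exceeds {max_null_rate:.0%}")
    return problems

# Usage on a tiny illustrative batch: one of three ids is null (33% > 25%)
rows = [{"id": 1}, {"id": None}, {"id": 3}]
violations = check_quality(rows, max_null_rate=0.25)
```

In a real pipeline the same assertions would be expressed over a Spark DataFrame (or via a framework such as Great Expectations) and wired into the job so a failing batch halts downstream loads.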

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Python Backend Engineer specializing in AWS with a focus on GenAI and ML, you will be responsible for designing, developing, and maintaining intelligent backend systems and AI-driven applications. Your primary objective will be to build and scale backend systems while integrating AI/ML models using Django or FastAPI. You will deploy machine learning and GenAI models with frameworks like TensorFlow, PyTorch, or Scikit-learn, and use LangChain for GenAI pipelines. Experience with LangGraph will be advantageous in this role.

Collaboration with data scientists, DevOps, and architects is essential to integrate models into production. You will work with AWS services such as EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment, and manage CI/CD pipelines for backend and model deployments. Ensuring the performance, scalability, and security of applications in cloud environments will also fall under your purview.

To be successful in this role, you should have at least 5 years of hands-on experience in Python backend development and a strong background in building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services is crucial, along with a solid understanding of ML/AI concepts and model deployment practices. Familiarity with ML libraries like TensorFlow, PyTorch, or Scikit-learn is required, as is experience with LangChain for GenAI applications. Experience with DevOps tools such as Docker, Kubernetes, Git, Jenkins, and Terraform will be beneficial, and an understanding of microservices architecture, CI/CD workflows, and agile development practices is also desirable.

Nice-to-have skills include knowledge of LangGraph, LLMs, embeddings, and vector databases; exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms; and familiarity with MLOps tools and practices for model monitoring, versioning, and retraining.

This is a full-time, permanent position with benefits such as health insurance and provident fund. The work location is in-person, with day shifts Monday to Friday. If you are interested in this opportunity, please contact the employer at +91 9966550640.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Udaipur, Rajasthan

On-site

We are looking for a skilled DevOps Engineer to join our technology team and play a key role in automating and enhancing development operations. Your responsibilities will include designing and implementing CI/CD pipelines to facilitate rapid development and deployment, automating infrastructure provisioning using tools such as Terraform, CloudFormation, or Ansible, monitoring system performance, managing cloud infrastructure on AWS/Azure/GCP, and ensuring system reliability and scalability through containerization and orchestration with Docker and Kubernetes.

You will collaborate with developers, QA, and security teams to streamline software delivery, maintain configuration management and version control using Git, and ensure system security through monitoring, patch management, and vulnerability scans. You will also assist with system backups, disaster recovery plans, and rollback strategies.

The ideal candidate has a strong background in CI/CD tools like Jenkins, GitLab CI, or CircleCI, proficiency in cloud services (AWS, Azure, or GCP), experience in Linux system administration and scripting (Bash, Python), and hands-on experience with Docker and Kubernetes in production environments. Familiarity with monitoring/logging tools such as Prometheus, Grafana, ELK, and CloudWatch, as well as good knowledge of networking, DNS, load balancers, and firewalls, is essential.

Preferred qualifications include a Bachelor's degree in Computer Science, IT, or a related field; DevOps certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate); experience in MLOps, serverless architectures, or microservices; and knowledge of security practices in cloud and DevOps environments. If you are passionate about DevOps and have the required skills and experience, we would like to hear from you.

Posted 2 days ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

The role at NiCE involves evaluating current and emerging technologies and collaborating with DevOps and other business units to establish and implement best practices. As a Cloud Network Engineer, you will work with various cloud providers, including AWS, Azure, GCP, and inContact's private cloud environments. Your responsibilities will include researching and evaluating cloud technologies, establishing design strategies and automation, reviewing designs and implementation plans, serving as a technical lead on projects, and communicating technical information to various stakeholders. You will also collaborate with colleagues, customers, vendors, and other parties to develop architectural solutions, understand existing systems and processes, and participate in the evaluation and selection of solutions or products.

To excel in this role, you should have at least 8 years of work experience in an internetworking environment, experience with cloud technologies such as AWS, Azure, and GCP, and expertise in Infrastructure as Code and scripting with JSON/YAML for CloudFormation. Additionally, you should have expert-level experience with Palo Alto firewalls and F5 load balancers, network switching and routing, and extensive knowledge of networking technologies, topologies, and protocols.

The role offers the opportunity to work in a fast-paced, collaborative, and creative environment at a market-leading global company. With endless internal career opportunities across multiple roles and locations, NiCE provides a chance to learn, grow, and innovate continuously. The NICE-FLEX hybrid model allows for maximum flexibility, with a combination of office and remote work days fostering teamwork, innovation, and a vibrant atmosphere. If you are passionate, innovative, and eager to push boundaries, you may just be the next valuable addition to the NiCE team!

Requisition ID: 7944
Reporting into: Manager, Cloud Operations
Role Type: Individual Contributor

About NiCE: NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100, to deliver exceptional customer experiences, combat financial crime, and ensure public safety. With over 8,500 employees across 30+ countries, NiCE is recognized as an innovation powerhouse excelling in AI, cloud, and digital domains.

Posted 2 days ago

Apply

0.0 - 3.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As an AWS Engineer with 6 months to 2 years of experience, you will be responsible for managing AWS infrastructure and collaborating with senior engineers to optimize cloud environments. Your role will involve deploying and configuring AWS resources, automating tasks through scripting, ensuring security and compliance, optimizing system performance, and providing support to internal teams and users. Additionally, you will create and maintain comprehensive documentation, analyze reports on system performance, and participate in designing cloud solutions based on business requirements.

Your primary duties and responsibilities will include:

- Managing AWS infrastructure by deploying and configuring resources such as EC2 instances, S3 buckets, and RDS databases.
- Monitoring and maintaining AWS infrastructure to ensure high performance, availability, and scalability.
- Troubleshooting and resolving issues related to AWS services and applications.
- Automating infrastructure tasks using AWS tools such as CloudFormation, Lambda, and IAM.
- Writing and maintaining scripts in languages like Python or Bash to streamline operations and improve efficiency.
- Implementing security best practices, including identity and access management, data encryption, and network security.
- Ensuring compliance with organizational policies and industry standards within the AWS environment.
- Analyzing and optimizing AWS infrastructure for cost efficiency, performance, and scalability.
- Collaborating with senior AWS engineers, IT teams, and stakeholders to understand business requirements and design cloud solutions.
- Providing support to internal teams and users for AWS-related inquiries and issues.

In terms of qualifications, you should have:

- 6 months to 2 years of experience in AWS or cloud computing.
- Hands-on experience with AWS services such as EC2, S3, RDS, and VPC.
- Proficiency in AWS services and tools, including CloudFormation, IAM, and Lambda.
- A strong understanding of networking concepts, Linux/Windows server administration, and scripting languages (e.g., Python, Bash).
- Familiarity with monitoring and logging tools such as CloudWatch.

Soft skills such as problem-solving, communication, teamwork, time management, and eagerness to learn new technologies will also be essential for success in this role.

This position is based in Indore, Madhya Pradesh, and requires a willingness to work on-site at various locations in India and globally, including the USA, UK, Australia, and the Middle East. On-site benefits will apply to the selected candidate. If you meet the qualifications and are ready to take on this exciting opportunity, we encourage you to apply. Your expertise as an AWS Engineer will play a crucial role in our IT Operations team, contributing to the success of our cloud-based systems and applications.
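For the Lambda automation this listing mentions, the basic unit of work is a handler function. A minimal sketch of an API Gateway-style handler follows; the event shape and field names are illustrative and not tied to any particular employer's stack:

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda handler sketch for an API Gateway proxy event.

    Parses a JSON request body and echoes a greeting back in the
    proxy-integration response format (statusCode/headers/body).
    """
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fabricated event, as one might do in a unit test
response = handler({"body": json.dumps({"name": "ops"})})
```

In a deployed function, API Gateway supplies the event and Lambda supplies the context object; locally, calling the function with a dict is the usual way to test the logic.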

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

Haryana

On-site

We are seeking a skilled DevSecOps Developer with over 6 years of experience to join our team in Gurgaon on a full-time basis. As a DevSecOps Developer, your primary responsibility will be to independently oversee the entire DevSecOps lifecycle, encompassing development, security, and cloud operations. You will drive initiatives, implement best practices, and manage a hybrid cloud infrastructure while upholding security throughout the software delivery pipeline.

Your duties will include owning and managing the complete DevSecOps lifecycle, including CI/CD, security integration, infrastructure automation, monitoring, and incident management. You will also design, implement, and sustain secure and scalable hybrid cloud infrastructure, integrate security practices into CI/CD pipelines, implement infrastructure as code (IaC), manage containerization and orchestration platforms, ensure compliance with security standards, and monitor systems for performance and security.

To excel in this role, you should have a minimum of 6 years of DevOps/DevSecOps experience, expertise in CI/CD tools, proficiency in at least one cloud platform, experience with infrastructure automation and configuration management tools, strong scripting skills, a deep understanding of security principles, familiarity with container technologies and orchestration, and excellent problem-solving, communication, and self-management skills.

If you are a proactive individual with a proven track record of handling environments independently, adept at collaborating with cross-functional teams, and passionate about maintaining security throughout the software development lifecycle, we encourage you to apply for this challenging and rewarding position.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are seeking a Full Stack Developer with at least 5 years of experience; at present, we are considering applications only from candidates based in India.

About the Role

As a Full Stack Developer, you will be responsible for developing robust and high-performing applications that drive our AI-driven construction platform. Your expertise in Java Spring Boot, microservices architecture, PDF processing, and AWS DevOps will be integral to the role. Key responsibilities include backend development, PDF document processing, front-end integration, cloud architecture, database management, security implementation, and collaboration with cross-functional teams.

Key Responsibilities

- Backend Development: Create scalable microservices using Java Spring Boot for optimal performance and maintainability.
- PDF Document Processing: Develop modules for extracting, processing, and managing PDF documents using tools like Python.
- Front-End Integration: Ensure seamless communication between frontend and backend services through REST APIs or GraphQL.
- Cloud Architecture & Deployment: Deploy and manage services on AWS following DevOps best practices such as containerization, orchestration, and CI/CD pipelines.
- Database & Data Flow: Design data models using PostgreSQL and MongoDB, and manage data pipelines across services.
- Security & Scalability: Implement access controls, encryption standards, and secure API endpoints for enterprise-level deployments.
- Cross-Team Collaboration: Collaborate with AI/ML engineers, product managers, and domain experts to support AI features.

Required Skills & Qualifications

Technical Skills
- Proficiency in Java programming, Spring Boot, and microservices architecture.
- Experience with PDF processing tools like Apache PDFBox, iText, or similar libraries.
- Knowledge of designing and consuming RESTful or GraphQL APIs.
- Hands-on experience with AWS services and infrastructure automation tools.
- Familiarity with databases like PostgreSQL and MongoDB, and caching mechanisms like Redis.
- Understanding of authentication and security principles such as OAuth2 and JWT.
- Exposure to consuming AI/ML models via APIs like OpenAI or SageMaker.

Soft Skills
- Ability to work independently and take ownership of backend services.
- Strong problem-solving skills and attention to detail.
- Effective communication and collaboration within agile teams.
- Dedication to delivering high-quality and scalable solutions.

Startup Readiness & Ownership: Bias for action, comfort with ambiguity, ownership mindset, and resourcefulness.

Product Thinking: User-centric approach, collaborative product shaping, and strategic trade-off awareness.

Collaboration & Communication: Comfort working in cross-functional teams, clear communication, and feedback culture fit.

Growth Potential: Willingness to learn, long-term mindset, and mentorship readiness.

Startup Cultural Fit: Mission-driven, flexible work style, and no big-company baggage.

Why Join Our Startup
- Contribute to shaping the future of construction technology through intelligent automation.
- Experience ownership and impact in a fast-paced startup environment.
- Competitive package, remote flexibility, and collaboration with industry experts.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a skilled systems engineer, you will design, implement, and maintain scalable, highly available systems and infrastructure. Your primary focus will be to monitor, troubleshoot, and resolve incidents to ensure optimal performance and availability of applications. You should have experience with multiple data centers and AWS migration strategy, along with a solid grasp of cloud-native concepts. Your duties also include developing and maintaining documentation related to system architecture, configuration, and processes. In addition, you will participate in on-call rotations to support production systems and handle critical incidents, and you will continuously evaluate and enhance systems, processes, and tools to improve reliability.

To excel in this role, you must possess strong troubleshooting and problem-solving skills. Effective communication and collaboration abilities are crucial, along with the ability to work in a fast-paced, dynamic environment. Proficiency in programming languages like Python, Java, or Go is required, as well as knowledge of SQL and NoSQL database technologies. Hands-on experience with tools like Kafka, Kong, and Nginx is a must, along with familiarity with monitoring and logging tools such as Prometheus, Splunk, the ELK stack, or similar. A strong foundation in Linux/Unix fundamentals is expected, as is experience with CI/CD pipelines and related tools like Jenkins and GitLab CI/CD. Experience with cloud platforms such as AWS, Azure, or Google Cloud, along with relevant certifications, will be beneficial. You should also have practical knowledge of containerization technologies like Docker and Kubernetes, and of infrastructure-as-code tools such as Terraform or CloudFormation.

Proficiency in HTML5, CSS3, and JavaScript (ES6+), along with a solid understanding of modern frontend frameworks/libraries like React, Angular, or Vue.js, is required. Knowledge of Node.js for server-side JavaScript development and familiarity with responsive design principles and mobile-first development are essential. Experience with state management libraries/tools, RESTful APIs, asynchronous programming, version control systems like Git, and UI/UX principles and design tools (Sketch, Figma, etc.) will be advantageous. Familiarity with browser compatibility issues, performance optimization techniques, testing frameworks (Jest, Enzyme, Cypress), CI/CD pipelines, automated testing, and web security principles is also valued for this role.
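Reliability-focused roles like this one often reason about availability in terms of SLOs and error budgets. As a hedged, self-contained sketch (the function names and the 99.9% target are illustrative, not from the posting), the arithmetic looks like:

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given SLO target."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, downtime_minutes: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999, 30), 1))
```

Teams typically wire numbers like these into dashboards (e.g., Prometheus/Grafana, both named in the posting) rather than computing them by hand.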

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

chandigarh

On-site

Adventus.io is a B2B2C SaaS-based marketplace supporting institutions, recruiters, and students within the international student placement sector. Our platform allows institutions, recruiters, and students to connect directly with one another, matching students across the world with the right international study experience. Founded in 2018, we are on a mission to change the way the world accesses international education. Behind the technology, we have over 500 talented people making it all happen. We are looking for ambitious self-starters who want to be part of our vision and create a positive legacy.

You will work in an agile environment alongside application developers on a wide array of initiatives as we deploy exciting new application features to AWS-hosted environments. A portion of your time will be spent assisting the Data Analytics team in building our big data collection and analytics capabilities to uncover customer, product, and operational insights. You will collaborate with other Software Engineers and Data Engineers to evaluate and identify optimal cloud architectures for custom solutions, and you will design, build, and deploy AWS applications at the direction of other architects, including data processing, statistical modeling, and advanced analytics. You will design for scale, including systems that auto-scale and auto-heal, and relentlessly strive to eliminate manual toil through automation. You will maintain the cloud stacks used to run our custom solutions, troubleshoot infrastructure issues causing solution outages or degradation, and implement the necessary fixes. You will implement monitoring tools and dashboards to evaluate the health, usage, and availability of custom solutions running in the cloud, and assist with building, testing, and maintaining CI/CD pipelines, infrastructure, and other tooling to enable the speedy deployment and release of solutions in the cloud. You will consistently improve the current state by regularly reviewing existing cloud solutions and recommending improvements (such as resiliency, reliability, autoscaling, and cost control), incorporating modern infrastructure-as-code deployment practices using tools such as CloudFormation, Terraform, and Ansible. You will identify, analyze, and resolve infrastructure vulnerabilities and application deployment issues, and collaborate with our Security Guild members to implement company-preferred security and compliance policies across the cloud infrastructure running our custom solutions. You will build strong cross-functional partnerships; this role interacts with business and engineering teams representing many different personalities and opinions.

Requirements:
- Minimum 4+ years of work experience as a DevOps Engineer building AWS cloud solutions.
- Strong experience deploying infrastructure as code using tools like Terraform and CloudFormation.
- Strong experience working with AWS services like ECS, EC2, RDS, CloudWatch, Systems Manager, EventBridge, ElastiCache, S3, and Lambda.
- Strong scripting experience with languages like Bash and Python.
- Understanding of full-stack development.
- Proficiency with Git.
- Experience with container orchestration (Kubernetes) and implementing CI/CD pipelines.
- A sustained track record of making significant, self-directed, end-to-end contributions to building, monitoring, securing, and maintaining cloud-native solutions, including data processing and analytics solutions built on services such as Segment, BigQuery, and Kafka.
- Exposure to ETL, automation tools such as AWS Glue, and presentation-layer services such as Data Studio and Tableau.
- Knowledge of web services, APIs, and REST.
- Exposure to deploying applications and microservices written in languages such as PHP and Node.js to AWS.
- A belief in simple solutions (not easy solutions) and the ability to accept consensus even when you may not agree.
- Strong interpersonal skills: you communicate technical details articulately and have demonstrable creative thinking and problem-solving abilities, with a passion for learning new technologies quickly.
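Infrastructure as code with CloudFormation recurs throughout these listings. A CloudFormation template is ultimately structured JSON (or YAML), so it can be generated and sanity-checked from Python before any deployment. A minimal sketch, with a hypothetical logical ID and bucket name:

```python
import json

def s3_bucket_template(bucket_name: str) -> str:
    """Render a minimal CloudFormation template defining one S3 bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppDataBucket": {  # logical resource ID (hypothetical)
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }
    return json.dumps(template, indent=2)

print(s3_bucket_template("example-app-data"))
```

In practice such a template would be handed to the CloudFormation service (for example via the AWS CLI or boto3); this sketch only shows the document shape.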

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

Are you ready to innovate with AWS in Bengaluru? Infosys is looking for an AWS DevOps Engineer with a solid foundation in cloud infrastructure and automation to join our team. If you thrive in a dynamic environment where your skills can make a real impact, we want you! You will design and develop scalable infrastructure using AWS services; automate deployment, monitoring, and management processes; implement and manage CI/CD pipelines to ensure smooth software delivery; collaborate with cross-functional teams to define, design, and ship new features; and participate in code reviews, providing feedback to improve code quality.

Required Skills:
- 3-5 years of experience in AWS cloud infrastructure and DevOps practices.
- Strong expertise in AWS services like EC2, S3, RDS, Lambda, and CloudFormation.
- Experience with automation tools such as Terraform or AWS CloudFormation.
- Familiarity with CI/CD tools like Jenkins, GitLab CI, AWS CodePipeline, and Docker/Kubernetes.
- Excellent problem-solving abilities and communication skills.
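Lambda appears in nearly every AWS skill list here. As a hedged sketch (the event shape and field names are invented for illustration, not taken from any posting), a minimal Python Lambda handler can be exercised locally before it is ever deployed behind API Gateway:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler: return a greeting for a 'name' field."""
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# In production the Lambda runtime invokes this; locally we call it directly.
result = lambda_handler({"name": "devops"}, None)
print(result["body"])
```

Calling the handler directly like this is a common unit-testing pattern, since the function is just ordinary Python until the AWS runtime wraps it.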

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

You will be joining Kodo, a company dedicated to simplifying the CFO stack for fast-growing businesses through a single platform that streamlines all purchase operations. Trusted by renowned companies like Cars24, Mensa Brands, and Zetwerk, Kodo empowers teams with flexible corporate processes and real-time insights integrated with ERPs. With $14M raised from investors like Y Combinator and Brex, Kodo is on a mission to provide exceptional products, a nurturing environment for its team, and profitable growth.

As a DevOps Engineer at Kodo, your primary responsibility will be to contribute to building and maintaining a secure and scalable fintech platform. You will collaborate with the engineering team to implement security best practices throughout the software development lifecycle. This role requires hands-on experience with tools and technologies such as Git, Linux, CI/CD tools (Jenkins, GitHub Actions), infrastructure-as-code tools (Terraform, CloudFormation), scripting/programming languages (Bash, Python, Node.js, Golang), Docker, Kubernetes, microservices paradigms, L4 and L7 load balancers, SQL/NoSQL databases, the Azure cloud, and architecting 3-tier applications.

Your key responsibilities will include implementing and enhancing logging, monitoring, and alerting systems; building and maintaining highly available production systems; optimizing applications for speed and scalability; collaborating with team members and stakeholders; and demonstrating a passion for innovation and product excellence. Experience with fintech security, CI/CD pipelines, cloud security tools like CloudTrail and CloudGuard, and security automation tools such as SOAR will be considered a bonus. To apply for this full-time position, please send your resume and cover letter to jobs@kodo.in.

Beyond a competitive salary and benefits package, we are looking for a proactive, security-conscious, problem-solving DevOps Engineer who can communicate complex technical concepts effectively, work efficiently under pressure, and demonstrate expertise in threat modeling, risk management, and vulnerability assessment and remediation.

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

chennai, tamil nadu

On-site

We are seeking a skilled AWS Developer to provide part-time, on-site freelance support for our expanding tech team. Your role involves leveraging your hands-on experience with AWS services to tackle technical challenges in cloud infrastructure and development, working on-site in Medavakkam, Chennai, and collaborating closely with our team to deliver effective solutions. Your responsibilities will encompass designing, implementing, and managing scalable and secure cloud infrastructure using AWS services such as EC2, S3, RDS, and Lambda, as well as developing and deploying applications on AWS that deliver optimal performance and scalability. Identifying and resolving issues related to AWS services, and integrating those services with existing systems, will be crucial aspects of your role. Maintaining clear, comprehensive documentation for implemented solutions, configurations, and processes, and providing technical guidance in support of project goals, are also key responsibilities. To excel in this position, you must have proven experience as an AWS Developer, a strong understanding of AWS services and cloud computing concepts, proficiency in AWS tools and services such as EC2, S3, IAM, RDS, Lambda, and CloudFormation, and scripting skills in languages like Python and Bash. Strong problem-solving skills, an analytical mindset, and the ability to troubleshoot complex technical issues are vital, as are excellent communication skills for collaborating with team members and providing technical support. You should hold a degree in Computer Science, Engineering, or a related field, or possess equivalent practical experience. This part-time role, based in Medavakkam, Chennai, requires 7.5 hours per week, a minimum of 6 years of experience with AWS, and 7 years of total work experience.
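Roles combining S3 with scripting in Python often involve laying out object keys so data can be listed efficiently by prefix. As an illustrative sketch (the dataset name and partition scheme are hypothetical, in the Hive-style `key=value` layout commonly used with Glue and Athena), a date-partitioned key builder might look like:

```python
from datetime import date

def s3_key(dataset: str, day: date, filename: str) -> str:
    """Build a date-partitioned S3 object key (Hive-style key=value layout)."""
    return (f"{dataset}/year={day.year:04d}/month={day.month:02d}/"
            f"day={day.day:02d}/{filename}")

print(s3_key("orders", date(2024, 7, 3), "part-0001.parquet"))
```

Keeping zero-padded, fixed-width partition values means a single prefix such as `orders/year=2024/month=07/` selects exactly one month's data.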

Posted 3 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a Lead / Staff Software Engineer on the Black Duck SRE team, you will play a key role in transforming our R&D products through the adoption of advanced cloud, containerization, microservices, modern software delivery, and other cutting-edge technologies. You will work independently to develop tools and scripts and to automate provisioning, deployment, and monitoring. The position is based in Bangalore (near Dairy Circle Flyover) with a hybrid work mode.

Key Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-7 years of experience in Site Reliability Engineering / DevOps Engineering.
- Strong hands-on experience with containerization and orchestration using Docker, Kubernetes (K8s), and Helm; able to secure, optimize, and scale K8s.
- Deep understanding of cloud platforms and services on AWS / GCP / Azure (preferably GCP); able to optimize cost, security, and performance.
- Solid experience with Infrastructure as Code (IaC) using Terraform / CloudFormation / Pulumi (preferably Terraform); able to write modules and manage state.
- Proficiency in scripting and automation using Bash, Python, or Golang; able to automate tasks with proper error handling.
- Experience with CI/CD pipelines and GitOps using Git / GitHub / GitLab / Bitbucket / ArgoCD / Harness.io; able to implement GitOps for deployments.
- Strong background in monitoring and observability using Prometheus / Grafana / ELK Stack / Datadog / New Relic; able to configure alerts and analyze trends.
- Good understanding of networking and security: firewalls, VPN, IAM, RBAC, TLS, SSO, Zero Trust; able to implement IAM, TLS, and logging.
- Experience with backup and disaster recovery using Velero, snapshots, and DR planning; able to implement backup solutions.
- Basic understanding of messaging concepts using RabbitMQ / Kafka / Pub/Sub / SQS.
- Familiarity with configuration management using Ansible / Chef / Puppet / SaltStack; able to run existing playbooks.

Key Responsibilities:
- Design and develop scalable, modular solutions that promote reuse and are easily integrated into our diverse product suite.
- Collaborate with cross-functional teams to understand their needs and incorporate user feedback into development.
- Establish best practices for modern software architecture, including microservices, serverless computing, and API-first strategies.
- Drive the strategy for containerization and orchestration using Docker, Kubernetes, or equivalent technologies.
- Ensure the platform's infrastructure is robust, secure, and compliant with industry standards.

What We Offer:
- An opportunity to be part of a dynamic and innovative team committed to making a difference in the technology landscape.
- A competitive compensation package, including benefits and flexible work arrangements.
- A collaborative, inclusive, and diverse work environment where creativity and innovation are valued.
- Continuous learning and professional development opportunities to grow your expertise within the industry.

Posted 3 days ago

Apply

5.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

You will be responsible for applying DevOps best practices to optimize the software development process. This includes system administration and the design, construction, and operation of container platforms such as Kubernetes, as well as expertise in container technologies like Docker and their management systems. Your role will also involve working with cloud-based monitoring, alerting, and observability solutions, and possessing in-depth knowledge of developer workflows with Git. You will be expected to document processes, procedures, and best practices, and to demonstrate strong troubleshooting and problem-solving skills. Proficiency in network fundamentals, firewalls, and ingress/egress patterns, along with experience in security configuration management and DevSecOps, will be crucial for this position. You should have hands-on experience with Linux, CI/CD tools (pipelines, GitHub, GitHub Actions, Jenkins), configuration management / infrastructure-as-code tools like CloudFormation and Terraform, and cloud technologies such as VMware, AWS, and Azure. Your responsibilities will also include build automation, deployment configuration, and enabling product automation scripts to run in CI. You will design, develop, integrate, and deploy CI/CD pipelines, collaborate closely with developers, project managers, and other teams to analyze requirements, and resolve software issues. Your ability to lead the development of infrastructure using open-source technologies like Elasticsearch and Grafana, as well as homegrown tools built with React and Python, will be highly valued.

Minimum Qualifications:
- Graduate/Master's degree in Computer Science, Engineering, or a related discipline
- 5 to 10 years of overall DevOps or related experience
- Good written and verbal communication skills
- Ability to manage and prioritize multiple tasks while working both independently and within a team
- Knowledge of software test practices, software engineering, and cloud technologies
- Knowledge of, or working experience with, static code analysis, license check tools, and other development process improvement tools

Desired Qualifications:
- Minimum 4 years of working experience with AWS, Kubernetes, Helm, and Docker-related technologies
- Providing visibility into cloud spending and usage across the organization
- Generating and interpreting reports on cloud expenditure, resource utilization, and usage optimization
- Network fundamentals: AWS VPC, AWS VPN, firewalls, and ingress/egress patterns
- Knowledge of, or experience with, embedded Linux and RTOS (e.g., ThreadX, FreeRTOS) development on ARM-based projects
- Domain knowledge of cellular wireless and WiFi is an asset
- Knowledge of distributed systems, networking, AMQP/MQTT, Linux, cloud security, and Python
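Kubernetes platform work like this usually involves generating and reviewing manifests. As a minimal illustration (the app name and image are placeholders), a Deployment manifest can be assembled programmatically and checked before it is applied with kubectl or packaged with Helm:

```python
def deployment_manifest(app: str, image: str, replicas: int = 2) -> dict:
    """Assemble a minimal Kubernetes apps/v1 Deployment as a plain dict."""
    labels = {"app": app}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match the pod labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": app, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
print(manifest["spec"]["replicas"])
```

Building the dict in code makes it easy to assert invariants (for example, that the selector matches the pod template labels) before anything reaches the cluster.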

Posted 3 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, and who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions).

In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines that serve the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage.
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.

To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of experience designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- Strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure as code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- Understanding of observability, data versioning, and pipeline testing tools.
- Previous engagement with diverse stakeholders, data requirement gathering, and support for iterative development cycles.
- Background or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.

This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities and countries, and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow. Learn more about Siemens careers at www.siemens.com/careers and discover the digital world of Siemens at www.siemens.com/careers/digitalminds
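Pipelines like the ones described above typically gate ingestion on validation. As a hedged, stdlib-only sketch (the schema and field names are invented for illustration), a row-level validator that splits records into accepted and rejected sets might look like:

```python
def validate_rows(rows, required=("id", "timestamp", "value")):
    """Split rows into (valid, rejected) based on required non-empty fields."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(field) not in (None, "") for field in required):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

good, bad = validate_rows([
    {"id": 1, "timestamp": "2024-07-01T00:00:00Z", "value": 3.2},
    {"id": 2, "timestamp": "", "value": 1.1},  # missing timestamp -> rejected
])
print(len(good), len(bad))
```

Routing rejects to a quarantine table instead of dropping them silently is what makes the lineage and data-quality monitoring mentioned above possible.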

Posted 3 days ago

Apply

3.0 - 6.0 years

7 - 10 Lacs

Hyderabad

Remote

Job Type: C2H (Contract to Hire)

As a Data Engineer, you will work in a diverse, innovative team responsible for designing, building, and optimizing the data infrastructure and pipelines for our new healthcare company's data platform. You'll architect and construct our core data backbone on a modern cloud stack, enabling the entire organization to turn complex data into life-saving insights. In this role, you will have the opportunity to solve challenging technical problems, mentor team members, and collaborate with innovative people to build a scalable, reliable, and world-class data ecosystem from the ground up.

Core Responsibilities (essential job duties and responsibilities):
- Design, develop, and maintain data replication streams and data flows to bring data from various SAP and non-SAP sources into Snowflake
- Implement curated datasets on a modern data warehouse and data hub
- Interface directly with business and systems subject matter experts to understand analytic needs and determine logical data model requirements
- Work closely with data architects and senior analysts to identify common data requirements and develop shared solutions
- Support data integration engineers and data modelers
- Support and maintain data warehouse, ETL, and analytic platforms

Required Skills and Experience:
- Data warehouse and ETL background
- Advanced SQL programming capabilities
- Background in preparing data for analysis and reporting
- Familiarity with data governance principles and tools
- Success in a highly dynamic environment, with the ability to shift priorities with agility
- Ability to go from whiteboard discussion to code
- Willingness to explore and implement new ideas and technologies
- Ability to communicate effectively with technical and non-technical audiences
- Ability to work independently with minimal supervision

Minimum Qualifications:
- 4+ years of experience with SQL; Snowflake strongly preferred
- 3+ years of experience with SAP Datasphere
- 2+ years of experience working directly with subject matter experts in both business and technology domains
- 2+ years of experience with ERP data, preferably SAP S/4, MS Dynamics, and/or BPCS
- 1+ year of experience with Salesforce, Workday, Concur, or another enterprise application

Nice-to-have:
- Experience with machine learning tools and processes
- Hands-on experience with Python
- Experience with Infrastructure as Code (IaC) principles and tools (e.g., Terraform, CloudFormation)

Education: Bachelor's in Computer Science, Information Systems, Engineering, a science discipline, or similar.

Posted 3 days ago

Apply

3.0 - 8.0 years

0 Lacs

hyderabad, telangana

On-site

As a Lead Consultant specializing in AWS Rehost Migration, you will leverage your 8+ years of technical expertise to facilitate the seamless transition of IT infrastructure from on-premises environments to the cloud. Your role will involve creating landing zones and overseeing application migration processes. Your key responsibilities will include assessing the source architecture and aligning it with the relevant target architecture within the cloud ecosystem. You must possess a strong foundation in Linux- or Windows-based systems administration, with a deep understanding of storage, security, and network protocols, along with proficiency in firewall rules, VPC setup, network routing, Identity and Access Management, and security implementation. To excel in this role, you should have hands-on experience with CloudFormation, Terraform templates, or similar automation and scripting tools. Your expertise in implementing AWS services such as EC2, Auto Scaling, ELB, EBS, EFS, S3, VPC, RDS, and Route 53 will be essential for successful migrations. Familiarity with server migration tools such as PlateSpin, Zerto, CloudEndure, AWS MGN, or similar platforms will be advantageous. You will also be required to identify application dependencies using discovery tools or automation scripts and define optimal move groups for migrations with minimal downtime. Your effective communication skills, both verbal and written, will enable you to collaborate efficiently with internal and external stakeholders. By working closely with various teams, you will contribute to the overall success of IT infrastructure migrations and ensure a smooth transition to the cloud environment. If you are a seasoned professional with a passion for cloud technologies and a proven track record in IT infrastructure migration, we invite you to join our team as a Lead Consultant - AWS Rehost Migration.

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

kozhikode, kerala

On-site

As a skilled and motivated Software Engineer with proficiency in Java, Angular, and hands-on experience with AWS cloud services, you will be responsible for designing, developing, testing, and deploying scalable software solutions that power the products and services at Blackhawk Network. Collaborating with cross-functional teams, you will deliver high-quality code in a fast-paced environment. Your responsibilities will include designing, developing, and maintaining scalable backend and frontend applications using Java and JavaScript frameworks like Node.js, Angular, or similar. Leveraging AWS cloud services such as Lambda, EC2, S3, API Gateway, RDS, ECS, and CloudFormation, you will deliver resilient cloud-native solutions. Writing clean, testable, and maintainable code following modern software engineering practices is crucial. Additionally, active participation in Agile ceremonies, including sprint planning, daily standups, and retrospectives, is expected. Collaboration with product managers, designers, and engineering peers to define, develop, and deliver new features is essential. Monitoring application performance, troubleshooting issues, and driving optimizations to ensure high availability and responsiveness are key responsibilities. Engaging in a rotating support schedule (2-sprint rotation) and participating in on-call responsibilities will be part of your role. Utilizing observability and monitoring tools to ensure system reliability and proactive issue detection is necessary. Qualifications for this role include 1-2 years of professional software development experience, strong proficiency in Java and JavaScript frameworks, and hands-on experience deploying applications using AWS services in production environments. A solid understanding of RESTful API design, asynchronous data handling, and event-driven architecture is required. Familiarity with DevOps best practices, including version control using Git and automated deployments, is expected. 
Experience with observability tools for logging, monitoring, and alerting is a plus. You should be a strategic thinker with strong problem-solving skills, a passion for continuous learning and improvement, and effective communication skills. A collaborative mindset, the ability to work closely with cross-functional teams, and a Bachelor's degree in Computer Science, Engineering, or a related field are required; advanced degrees are considered a plus. Finally, the ability to thrive in a dynamic, fast-paced environment and adapt to changing technologies and priorities is crucial for success in this role.

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Systems Engineer specializing in Data DevOps/MLOps, you will play a crucial role in our team by leveraging your expertise in data engineering, automation for data pipelines, and operationalizing machine learning models. This position requires a collaborative professional who can design, deploy, and manage CI/CD pipelines for data integration and machine learning model deployment. You will be responsible for building and maintaining infrastructure for data processing and model training using cloud-native tools and services. Your role will involve automating processes for data validation, transformation, and workflow orchestration, ensuring seamless integration of ML models into production. You will work closely with data scientists, software engineers, and product teams to optimize performance and reliability of model serving and monitoring solutions. Managing data versioning, lineage tracking, and reproducibility for ML experiments will be part of your responsibilities. You will also identify opportunities to enhance scalability, streamline deployment processes, and improve infrastructure resilience. Implementing security measures to safeguard data integrity and ensure regulatory compliance will be crucial, along with diagnosing and resolving issues throughout the data and ML pipeline lifecycle. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, along with 4+ years of experience in Data DevOps, MLOps, or similar roles. Proficiency in cloud platforms like Azure, AWS, or GCP is required, as well as competency in using Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible. Expertise in containerization and orchestration technologies like Docker and Kubernetes is essential, along with a background in data processing frameworks such as Apache Spark or Databricks. 
Skills in Python programming, including proficiency in data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch, are necessary. Familiarity with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions, with version control tools like Git, and with MLOps platforms such as MLflow or Kubeflow will be valuable. Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana), strong problem-solving skills, and the ability to contribute independently and within a team are also required. Excellent communication skills and attention to documentation are essential for success in this role. Nice-to-have qualifications include knowledge of DataOps practices and tools like Airflow or dbt, an understanding of data governance concepts and platforms like Collibra, and a background in Big Data technologies like Hadoop or Hive. Certifications in cloud platforms or data engineering would be an added advantage.
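Data versioning and reproducibility for ML experiments come up in this role. As a hedged, stdlib-only illustration (the record format is invented), one common approach is to fingerprint a dataset with a content hash so an experiment can record exactly what it trained on:

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Deterministic content hash for a list of dict records (order-insensitive)."""
    canonical = sorted(json.dumps(r, sort_keys=True) for r in records)
    digest = hashlib.sha256("\n".join(canonical).encode("utf-8"))
    return digest.hexdigest()[:12]

v1 = dataset_fingerprint([{"id": 1, "y": 0}, {"id": 2, "y": 1}])
v2 = dataset_fingerprint([{"id": 2, "y": 1}, {"id": 1, "y": 0}])  # same rows, reordered
print(v1 == v2)
```

Platforms such as MLflow attach identifiers like this to runs, so two experiments can be compared with confidence that they saw the same data.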

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are an experienced OpenText ECM Content Server Senior Consultant with 4 to 7 years of experience who will lead enterprise content management solutions. Your role involves designing integration architectures, developing secure applications and workflows, defining security measures, and leading proof-of-concept initiatives, while providing mentorship to development teams, collaborating with stakeholders, and coordinating with infrastructure teams on deployment and operations. Key responsibilities include leading architectural design and strategy for OpenText Content Server solutions, designing integration architectures that connect Content Server with external systems, developing secure and scalable applications and APIs, defining security measures and enterprise content governance, leading proof-of-concept initiatives and technical assessments, providing mentorship to development teams, and collaborating with stakeholders. Required technical skills include 4+ years of experience with OpenText Content Server and its architecture, 2+ years of experience with JScript, extensive integration experience, strong programming skills in Java and JavaScript, knowledge of enterprise integration patterns, API gateways, and message brokers, database expertise, familiarity with enterprise security frameworks, and experience with WebReports, LiveReports, GCI PowerTools, and Content Server modules. Preferred skills include experience with the OpenText Extended ECM suite, frontend development with React, Angular, or Vue.js, DevOps practices, knowledge of compliance standards, and OpenText certifications. Professional traits required for the role include strong leadership, mentoring, and team guidance abilities, excellent communication skills, strategic thinking, project management skills, and experience with vendor management and technical governance.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
