5.0 - 8.0 years
13 - 17 Lacs
Noida
Work from Office
- Perform preventive maintenance on storage arrays, firmware updates, and health checks
- Continuously monitor storage performance and capacity utilization
- Optimize storage through capacity management strategies (e.g., data tiering, deduplication)
- Recommend and implement storage expansion strategies
- Update and upgrade backup tools, applications, and OS; handle critical backup issues
- Execute backup-related projects and initiatives
- Change management (raising and closing RFCs); vendor and OEM coordination
- Create backup-related reports; fulfill incidents and service requests
- Advanced Windows Server management and troubleshooting skills
- Advanced-to-expert knowledge of different backup tools and technologies
- Work experience with the same backup technology
- Hands-on experience with Avamar, NetWorker, Data Domain, and the Druva backup suite
- Publish weekly/monthly backup-related reports; manage backup BAU operations
Tools and technology: EMC Avamar and NetWorker, EMC Data Domain and RecoverPoint, EMC Unity storage, Druva cloud-based backup for endpoints and VMs
Certifications: EMC Certified Storage Architect, EMC Certified Storage Administrator
Automation tools and technology: Ansible, Terraform, CloudFormation, Windows PowerShell scripting, Azure Automation; knowledge of datacenter and networking automation use cases
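The weekly/monthly backup reporting duty listed above is typically automated rather than compiled by hand. A minimal sketch in Python; the job-record fields (`client`, `status`, `bytes`) are illustrative assumptions, not any particular backup tool's export format:

```python
from collections import defaultdict

def summarize_backup_jobs(jobs):
    """Aggregate per-client backup job results into a simple success-rate report.

    `jobs` is a list of dicts with illustrative fields:
    client, status ("success"/"failed"), and optional bytes protected.
    """
    report = defaultdict(lambda: {"success": 0, "failed": 0, "bytes_protected": 0})
    for job in jobs:
        entry = report[job["client"]]
        if job["status"] == "success":
            entry["success"] += 1
            entry["bytes_protected"] += job.get("bytes", 0)
        else:
            entry["failed"] += 1
    for entry in report.values():
        total = entry["success"] + entry["failed"]
        entry["success_rate"] = round(entry["success"] / total * 100, 1) if total else 0.0
    return dict(report)

# Sample job records, as a real script might read from a backup tool's CSV/API export.
jobs = [
    {"client": "db01", "status": "success", "bytes": 500},
    {"client": "db01", "status": "failed"},
    {"client": "web01", "status": "success", "bytes": 200},
]
weekly_report = summarize_backup_jobs(jobs)
```

A real report job would pull records from the backup tool's API or logs and render the summary to email or a dashboard; the aggregation step stays the same.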
Posted 16 hours ago
5.0 - 8.0 years
13 - 17 Lacs
Gurugram
Work from Office
- Perform preventive maintenance on storage arrays, firmware updates, and health checks
- Continuously monitor storage performance and capacity utilization
- Optimize storage through capacity management strategies (e.g., data tiering, deduplication)
- Recommend and implement storage expansion strategies
- Update and upgrade backup tools, applications, and OS; handle critical backup issues
- Execute backup-related projects and initiatives
- Change management (raising and closing RFCs); vendor and OEM coordination
- Create backup-related reports; fulfill incidents and service requests
- Advanced Windows Server management and troubleshooting skills
- Advanced-to-expert knowledge of different backup tools and technologies
- Work experience with the same backup technology
- Hands-on experience with Avamar, NetWorker, Data Domain, and the Druva backup suite
- Publish weekly/monthly backup-related reports; manage backup BAU operations
Tools and technology: EMC Avamar and NetWorker, EMC Data Domain and RecoverPoint, EMC Unity storage, Druva cloud-based backup for endpoints and VMs
Certifications: EMC Certified Storage Architect, EMC Certified Storage Administrator
Automation tools and technology: Ansible, Terraform, CloudFormation, Windows PowerShell scripting, Azure Automation; knowledge of datacenter and networking automation use cases
Posted 16 hours ago
5.0 - 8.0 years
13 - 17 Lacs
Pune
Work from Office
- Perform preventive maintenance on storage arrays, firmware updates, and health checks
- Continuously monitor storage performance and capacity utilization
- Optimize storage through capacity management strategies (e.g., data tiering, deduplication)
- Recommend and implement storage expansion strategies
- Update and upgrade backup tools, applications, and OS; handle critical backup issues
- Execute backup-related projects and initiatives
- Change management (raising and closing RFCs); vendor and OEM coordination
- Create backup-related reports; fulfill incidents and service requests
- Advanced Windows Server management and troubleshooting skills
- Advanced-to-expert knowledge of different backup tools and technologies
- Work experience with the same backup technology
- Hands-on experience with Avamar, NetWorker, Data Domain, and the Druva backup suite
- Publish weekly/monthly backup-related reports; manage backup BAU operations
Tools and technology: EMC Avamar and NetWorker, EMC Data Domain and RecoverPoint, EMC Unity storage, Druva cloud-based backup for endpoints and VMs
Certifications: EMC Certified Storage Architect, EMC Certified Storage Administrator
Automation tools and technology: Ansible, Terraform, CloudFormation, Windows PowerShell scripting, Azure Automation; knowledge of datacenter and networking automation use cases
Posted 16 hours ago
5.0 - 8.0 years
13 - 17 Lacs
Mumbai
Work from Office
- Perform preventive maintenance on storage arrays, firmware updates, and health checks
- Continuously monitor storage performance and capacity utilization
- Optimize storage through capacity management strategies (e.g., data tiering, deduplication)
- Recommend and implement storage expansion strategies
- Update and upgrade backup tools, applications, and OS; handle critical backup issues
- Execute backup-related projects and initiatives
- Change management (raising and closing RFCs); vendor and OEM coordination
- Create backup-related reports; fulfill incidents and service requests
- Advanced Windows Server management and troubleshooting skills
- Advanced-to-expert knowledge of different backup tools and technologies
- Work experience with the same backup technology
- Hands-on experience with Avamar, NetWorker, Data Domain, and the Druva backup suite
- Publish weekly/monthly backup-related reports; manage backup BAU operations
Tools and technology: EMC Avamar and NetWorker, EMC Data Domain and RecoverPoint, EMC Unity storage, Druva cloud-based backup for endpoints and VMs
Certifications: EMC Certified Storage Architect, EMC Certified Storage Administrator
Automation tools and technology: Ansible, Terraform, CloudFormation, Windows PowerShell scripting, Azure Automation; knowledge of datacenter and networking automation use cases
Posted 16 hours ago
9.0 - 14.0 years
15 - 30 Lacs
Hyderabad, Bangalore Rural, Bengaluru
Hybrid
Strong experience required with AWS, core Python programming, security, and APIs.
Posted 2 days ago
2.0 - 6.0 years
5 - 10 Lacs
Bengaluru
Hybrid
Designation: Software Engineer

Job Brief
We are looking for an experienced Software Engineer (AWS Deployment) to join our engineering team. The ideal candidate will have strong knowledge of CI/CD pipelines and hands-on deployment experience with tools such as Kubernetes, AWS CloudFormation, and YAML-based configurations. This role also requires basic Python skills, familiarity with other AWS services, and excellent communication skills to work effectively with cross-functional teams.

Roles and Responsibilities
- Design, implement, and maintain CI/CD pipelines for application and infrastructure deployments.
- Deploy and manage applications on AWS using Kubernetes, CloudFormation templates, and YAML configurations.
- Support and automate serverless deployments using AWS Lambda and other serverless services (SQS, SNS, S3, KMS, RDS, etc.).
- Develop and maintain Python scripts to automate deployment, monitoring, and operational tasks.
- Collaborate with development and DevOps teams to ensure smooth release processes.
- Troubleshoot deployment and environment issues, performing root cause analysis and implementing fixes.
- Monitor deployments and production environments using AWS CloudWatch and other observability tools.
- Document deployment processes, standards, and best practices.
- Clearly communicate deployment plans, risks, and issue resolutions to stakeholders.

Requirements
- Engineering degree (BE/B.Tech/ME/M.Tech) or BCA/MCA.
- 3–5 years of experience in deployment engineering, DevOps, or cloud infrastructure management.
- Strong knowledge of CI/CD tools and processes (e.g., Jenkins, GitLab CI/CD, AWS CodePipeline).
- Hands-on experience with Kubernetes, CloudFormation, and YAML scripting.
- Good Python programming skills.
- Practical experience with AWS Lambda and serverless application deployments.
- Familiarity with key AWS services such as Lambda, SQS, SNS, S3, CloudWatch, IAM, and RDS.
- Excellent problem-solving and communication skills.
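The CloudFormation/YAML deployment work described in this posting often starts from templates built as plain data structures. A minimal sketch in Python; the function name, bucket, and key are invented for illustration, and CloudFormation accepts the JSON form shown as well as YAML:

```python
import json

def lambda_stack_template(function_name, handler, bucket, key):
    """Build a minimal CloudFormation template (as a dict) for one Lambda function."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            # Execution role the function assumes at runtime.
            "AppFunctionRole": {
                "Type": "AWS::IAM::Role",
                "Properties": {
                    "AssumeRolePolicyDocument": {
                        "Version": "2012-10-17",
                        "Statement": [{"Effect": "Allow",
                                       "Principal": {"Service": "lambda.amazonaws.com"},
                                       "Action": "sts:AssumeRole"}],
                    },
                },
            },
            "AppFunction": {
                "Type": "AWS::Lambda::Function",
                "Properties": {
                    "FunctionName": function_name,
                    "Handler": handler,
                    "Runtime": "python3.12",
                    "Code": {"S3Bucket": bucket, "S3Key": key},
                    "Role": {"Fn::GetAtt": ["AppFunctionRole", "Arn"]},
                },
            },
        },
        "Outputs": {"FunctionArn": {"Value": {"Fn::GetAtt": ["AppFunction", "Arn"]}}},
    }

template = lambda_stack_template("orders-api", "app.handler", "deploy-artifacts", "orders/v1.zip")
template_json = json.dumps(template, indent=2)
```

In a pipeline, the serialized template would be passed to `aws cloudformation deploy` or an SDK call; generating it from code keeps resource definitions reviewable and testable.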
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Join us as an AWS Cloud Engineer at Barclays, where you will play a crucial role in supporting the successful delivery of Location Strategy projects while adhering to planned budgets, quality standards, and governance protocols. As a key driver in the evolution of our digital landscape, you will lead the way in implementing cutting-edge technology to enhance our digital offerings and ensure exceptional customer experiences.

To excel in this role, you should have expertise in architecting and managing AWS infrastructure, including EC2, ALB/NLB, VPC, route tables, NAT, security groups, and Auto Scaling. Your responsibilities will also involve developing and maintaining CloudFormation templates for infrastructure provisioning, managing AWS Service Catalog products, implementing Infrastructure as Code using Chef/Ansible, and handling Docker containers while orchestrating them with Kubernetes (EKS preferred). Furthermore, a strong understanding of CI/CD tools, Linux systems, SSL certificates, and security protocols is essential.

Valued skills for this role may include AWS certifications such as AWS Solutions Architect or AWS DevOps Engineer, experience with Terraform or other IaC tools in addition to CloudFormation, familiarity with observability tools such as Elasticsearch and Prometheus, proficiency in scripting with Python or similar languages, and exposure to Agile methodologies and DevOps culture.

As an AWS Cloud Engineer at Barclays, your primary purpose will be to build and maintain infrastructure platforms and products that support applications and data systems. This involves leveraging hardware, software, networks, and cloud computing platforms to ensure the reliability, scalability, and security of the infrastructure.
You will be accountable for various tasks, including engineering development, incident management, automation, security implementation, teamwork, and continuous learning to stay abreast of industry trends and foster technical excellence within the organization. If you are appointed as an Assistant Vice President, your expectations will involve advising and influencing decision-making, leading a team, and demonstrating leadership behaviours aligned with the LEAD framework: Listen and be authentic, Energise and inspire, Align across the enterprise, Develop others. You will be responsible for operational effectiveness, collaboration with other functions, and contributing to policy development.

All colleagues at Barclays are expected to embody the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as demonstrate the Barclays Mindset of Empower, Challenge, and Drive in their daily interactions and work.
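Security-group management at the scale this role describes is usually backed by automated guardrails. A minimal sketch of one such audit in Python; the rule dictionaries mirror the shape of the EC2 API's `IpPermissions` structure, but the group IDs and the choice of risky ports are invented for illustration:

```python
def find_open_ingress(security_groups, risky_ports=(22, 3389)):
    """Flag security-group rules that allow risky ports from anywhere (0.0.0.0/0)."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            if open_to_world and rule.get("FromPort") in risky_ports:
                findings.append((sg["GroupId"], rule["FromPort"]))
    return findings

# Invented sample data; a real audit would fetch this via describe_security_groups.
groups = [
    {"GroupId": "sg-111", "IpPermissions": [
        {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-222", "IpPermissions": [
        {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
]
open_rules = find_open_ingress(groups)
```

SSH open to the world is flagged; HTTPS open to the world is deliberately allowed by this illustrative policy.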
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As an AI Engineer specializing in Large Language Models (LLMs) and Agentic Automation, you will be responsible for designing and deploying scalable AI pipelines, integrating AI-powered coding assistants, and ensuring secure, high-performance delivery of AI-driven applications in enterprise environments. Your role will involve collaborating cross-functionally, communicating effectively, and providing regular project updates while working in large-scale, security-focused environments. Key Responsibilities - Design, build, and deploy LLM-powered applications and AI agents into production, leveraging AI coding assistants and frameworks such as LangChain and CrewAI to accelerate development. - Build and manage scalable AI pipelines using AWS services like Lambda, IAM, and CloudFormation, ensuring clean, testable, and scalable Python code with best practices in debugging and problem-solving. - Implement and maintain CI/CD pipelines and Infrastructure as Code (IaC), preferably using CDK, while ensuring compliance and reliability in enterprise environments. - Take end-to-end ownership of tasks, proactively anticipating challenges, and delivering results within a cloud-native architecture. Required Skills - Strong experience with LLMs, prompt engineering, and AI agent deployment. - Hands-on expertise with AI-powered coding assistants, LLM frameworks (CrewAI, LangChain), and proficiency in AWS services (Lambda, IAM, CloudFormation). - Strong coding background, particularly in Python, with debugging and problem-solving skills. - Experience with CI/CD pipelines and cloud-native deployments in enterprise-grade security environments. Ideal Candidate Traits - Self-driven and proactive, taking ownership of end-to-end delivery. - Strong communicator capable of keeping stakeholders updated. - Forward-thinking individual who can anticipate, design, and solve problems beyond executing instructions. 
If you possess a passion for AI technologies, a drive for innovation, and the skills mentioned above, we encourage you to apply for this exciting opportunity as an AI Engineer specializing in LLMs and Agentic Automation.
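Deploying LLM-powered applications into production, as this role requires, typically means wrapping non-deterministic model output in validation and retry logic. A framework-agnostic sketch; the `generate` callable stands in for any LLM client, and the `intent`/`confidence` keys are invented for the example:

```python
import json

def call_with_validation(generate, prompt, required_keys, max_attempts=3):
    """Call an LLM-style `generate(prompt)` function until it returns valid JSON
    containing `required_keys`, or raise after `max_attempts`."""
    last_error = None
    for attempt in range(max_attempts):
        raw = generate(prompt)
        try:
            parsed = json.loads(raw)
            if all(k in parsed for k in required_keys):
                return parsed
            last_error = f"missing keys on attempt {attempt + 1}"
        except json.JSONDecodeError as exc:
            last_error = str(exc)
    raise ValueError(f"no valid response after {max_attempts} attempts: {last_error}")

# Stub model: fails once, then returns valid JSON. Real code would call an LLM API here.
responses = iter(['not json', '{"intent": "refund", "confidence": 0.9}'])
result = call_with_validation(
    lambda p: next(responses), "classify: ...", ["intent", "confidence"]
)
```

Frameworks like LangChain provide output parsers along these lines; the point of the sketch is that the validation loop, not the model call, is what makes the agent production-safe.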
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a CloudFormation Template Developer, you will be responsible for designing, developing, and maintaining CloudFormation templates using YAML. Your primary focus will be on implementing infrastructure automation solutions to efficiently provision and manage cloud resources. Collaborating with cross-functional teams to understand infrastructure requirements and translate them into code will be a key aspect of your role.

Using CloudFormation, you will provision and configure AWS resources to ensure scalability, security, and high availability. You will also be tasked with optimizing and enhancing existing CloudFormation templates to improve resource utilization and performance. Utilizing version control systems such as Git to manage and track changes is essential, along with integrating Infrastructure as Code (IaC) workflows into CI/CD pipelines to automate deployment and testing of infrastructure changes.

As part of your responsibilities, you will implement security best practices within CloudFormation templates to uphold a secure cloud infrastructure. Ensuring compliance with organizational policies, industry standards, and regulatory requirements is crucial. You will implement monitoring solutions for infrastructure deployed through CloudFormation, and troubleshoot and resolve any issues related to infrastructure deployment and configuration. Creating and maintaining comprehensive documentation for CloudFormation templates and related processes is imperative. Additionally, providing training and support to team members on best practices for utilizing CloudFormation will be part of your role.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a YAML developer focusing on CloudFormation.
- Strong proficiency in YAML scripting and an understanding of Infrastructure as Code principles.
- In-depth knowledge of AWS services and architecture.
- Experience with version control systems, CI/CD pipelines, and automation tools.
- Familiarity with security best practices and compliance standards in cloud environments.
- Excellent problem-solving and communication skills.
- AWS Certified DevOps Engineer or related certifications are a plus.
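Teams maintaining many CloudFormation templates, as this role describes, often codify structural checks in CI. A minimal linter sketch in Python; the rules are illustrative assumptions and deliberately far simpler than a real tool like cfn-lint:

```python
def lint_template(template):
    """Return a list of problems found in a CloudFormation template dict.

    Checks only two illustrative rules: the template has a non-empty
    Resources section, and every resource declares a plausible Type.
    """
    problems = []
    if not template.get("Resources"):
        problems.append("template has no Resources section")
        return problems
    for logical_id, resource in template["Resources"].items():
        if "Type" not in resource:
            problems.append(f"{logical_id}: missing Type")
        elif not resource["Type"].startswith(("AWS::", "Custom::")):
            problems.append(f"{logical_id}: suspicious Type {resource['Type']!r}")
    return problems

good = {"Resources": {"Bucket": {"Type": "AWS::S3::Bucket"}}}
bad = {"Resources": {"Bucket": {"Properties": {}}}}
good_problems = lint_template(good)
bad_problems = lint_template(bad)
```

In a CI/CD pipeline this check would run on every template change (after parsing the YAML with a library such as PyYAML) and fail the build on any finding.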
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Infra DevOps Linux Engineer on our Digital Technology Team, you will play a crucial role in building innovative solutions to support the Baker Hughes Cloud strategy. Working alongside a collaborative team in an Agile/DevOps environment, you will be responsible for developing and deploying enterprise cloud infrastructure automation solutions with a focus on Linux image creation, security, patching, and administration. Your passion for automating everything and delivering exceptional customer experiences will drive you to excel in this role while ensuring compliance with security guidelines.

To be successful in this position, you should possess a Bachelor's degree in Computer Science, Information Systems, or a related field, along with strong knowledge of AWS and Azure and at least 5 years of relevant experience. Your expertise in enterprise-scale Linux systems administration, programming skills (particularly Python), and familiarity with IP networking concepts will be essential. Additionally, experience with infrastructure orchestration tools such as SSM, Automation Accounts, and Ansible, plus a good understanding of CI/CD concepts, will be beneficial for this role.

At Baker Hughes, we value diversity and recognize that everyone has unique preferences when it comes to work patterns. Therefore, we offer flexible working arrangements, including the option to work flexible hours to accommodate individual productivity peaks. Our commitment to our employees' well-being extends beyond work hours, with comprehensive benefits such as private medical care, life insurance, and financial programs designed to support our workforce. As an energy technology company operating in over 120 countries, Baker Hughes is dedicated to driving innovation and progress in the energy sector.

If you are looking for an opportunity to contribute to meaningful advancements in a supportive and dynamic environment, we invite you to join our team of forward-thinkers who are dedicated to taking energy forward. Let's work together to shape a safer, cleaner, and more efficient future for people and the planet.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Linux Administrator at Fulcrum Digital, you will be responsible for managing Linux operating systems such as Red Hat, CentOS, and Ubuntu, along with associated utilities. Your expertise in networking concepts and protocols like TCP/IP, DNS, and DHCP will be crucial in ensuring smooth operations. You will implement security best practices, system hardening, and patching while troubleshooting any issues that arise in the Linux environment and network.

Your role will involve working with automation tools like Salt and Ansible to streamline processes and enhance efficiency. Experience with virtualization technologies such as VMware and KVM and cloud platforms like AWS will be advantageous as you manage a large number of VMs. Additionally, your knowledge of disaster recovery management and continuous improvement practices will contribute to the overall stability and resilience of the systems.

To excel in this role, you must have strong problem-solving skills, attention to detail, and the ability to work both independently and collaboratively. Effective communication and interpersonal skills will be essential for interacting with team members and stakeholders. Proficiency with tools like Prometheus, Grafana, Splunk, Git, Jenkins, and Bitbucket/GitLab, as well as configuration management tools like SaltStack or Ansible, will be required. Scripting skills in Bash, Python, YAML, and Groovy will also be beneficial.

If you are based in Pune, India, and possess the necessary skills and experience in Linux administration, networking, virtualization, security, and scripting, we encourage you to apply for this role at Fulcrum Digital. Join us in driving digital transformation and technology services across various industries for a rewarding and challenging career opportunity.
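Day-to-day Linux administration of this kind usually leans on small monitoring scripts. A sketch that parses POSIX `df -P`-style output and flags filesystems over a usage threshold; the sample output is fabricated, and a real check would feed in live `df -P` output:

```python
def filesystems_over_threshold(df_output, threshold=90):
    """Parse POSIX `df -P` output and return (mount point, use%) pairs at or above threshold."""
    alerts = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        use_pct = int(fields[4].rstrip("%"))  # 5th column is the capacity percentage
        if use_pct >= threshold:
            alerts.append((fields[5], use_pct))  # 6th column is the mount point
    return alerts

# Fabricated sample; in practice: subprocess.run(["df", "-P"], ...).stdout
sample = """\
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 102400 97280 5120 95% /
/dev/sdb1 512000 102400 409600 20% /data
"""
alerts = filesystems_over_threshold(sample)
```

The same function slots naturally into a cron job or a Prometheus textfile exporter, both common patterns in environments like the one described.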
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
Are you ready to work at a fast-growing company where you can make a difference? Boomi aims to make the world a better place by connecting everyone to everything, anywhere. Our award-winning, intelligent integration and automation platform helps organizations power the future of business. At Boomi, you'll work with world-class people and industry-leading technology. We hire trailblazers with an entrepreneurial spirit who can solve challenging problems, make a real impact, and want to be part of building something big. If this sounds like a good fit for you, check out boomi.com or visit our Boomi Careers page to learn more.

We are looking for a Principal Software Engineer (AI/ML) who is passionate about analytics and knowledgeable about the platforms enabling analytic solutions. We need backend software engineers with strong technical expertise in AI/ML technologies to be part of a fast-growing, driven team playing a key role in Boomi's future initiatives. Successful candidates will be expected to thrive in challenging environments and have a track record of delivering on commitments within a fast-paced Agile environment. Our team at Boomi is collaborative, made up of self-starters with technical curiosity, and eager to help each other succeed. We value sharing new ideas and improvements, building and deploying smart services core to our platform, and taking pride in our work. If you are a team player with technical curiosity and a willingness to learn, you may be a good fit for our team.

Responsibilities:
- Collaborate throughout the software development lifecycle as a key member of the Agile team.
- Guide and participate in the design, development, unit testing, and deployment of Boomi products and services.
- Work independently with minimal guidance, delivering on commitments and meeting deadlines for complex initiatives.
- Collaborate on organization-wide initiatives with other Agile development teams.
- Research, validate, and recommend technologies for delivering robust, secure, and scalable services and architectures.
- Set up, develop, and maintain best practices for the team, including thorough code reviews.
- Provide architecture and detailed design documents for grooming and planning.
- Investigate and resolve complex customer issues and mentor team members.
- Keep up with the latest developments in the field through continuous learning and collaboration with data scientists, data engineers, and front-end engineers.

Requirements:
- Bachelor's degree in Computer Science or a related field.
- 8+ years of development experience with Java and Python.
- Proficiency in SQL and database technologies.
- Knowledge of generative AI technologies, LLMs, and AI agents.
- Strong prior experience working with AWS services.
- Experience developing microservices applications and deploying them using Docker and Kubernetes.
- Strong problem-solving skills with an emphasis on product development.
- Strong analytical, written, and verbal communication skills.
- Experience developing highly scalable, high-throughput web applications and backend systems.
- Experience using Linux/Unix environments.
- Knowledge of infrastructure provisioning tools like Terraform, CloudFormation, and Ansible.

Be Bold. Be You. Be Boomi. We take pride in our culture and core values and are committed to being a place where everyone can be their true, authentic self. Our team members are our most valuable resources, and we look for and encourage diversity in backgrounds, thoughts, life experiences, knowledge, and capabilities. All employment decisions at Boomi are based on business needs, job requirements, and individual qualifications. Boomi strives to create an inclusive and accessible environment for candidates and employees. If you need accommodation during the application or interview process, please submit a request to talent@boomi.com.
This inbox is strictly for accommodations; please do not send resumes or general inquiries.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
We are looking for a skilled and proactive DevOps Engineer to join our team. The ideal candidate should possess a strong foundation in cloud-native solutions and microservices, focusing on automation, infrastructure as code (IaC), and continuous improvement. As a DevOps Engineer, you will be responsible for managing infrastructure architecture and non-functional requirements across various projects, ensuring high standards in security, operational efficiency, and cost management.

Your key responsibilities will include owning infrastructure architecture and non-functional requirements for a set of projects, designing and integrating cloud architecture with cloud-native solutions, implementing common infrastructure best practices emphasizing security, operational efficiency, and cost efficiency, demonstrating strong knowledge of microservices-based architecture, and understanding Kubernetes and Docker. You should also have experience in SecOps practices, developing CI/CD pipelines for faster builds with quality and security automation, enabling observability within the platform, and building and deploying cloud IaC in AWS using tools like Crossplane, Terraform, or CloudFormation.

To be successful in this role, you should have at least 2 years of hands-on experience in DevOps and possess technical skills such as proficiency in containerization (Docker) and container orchestration (Kubernetes), strong scripting and automation abilities, and familiarity with DevOps tools and CI/CD processes, especially in Agile environments. Extensive hands-on experience with AWS, including deploying and managing infrastructure through IaC (Terraform, CloudFormation), and proficiency in configuration management tools like Ansible and Kustomize are essential. Additional skills like knowledge of microservices architecture, observability best practices, and SecOps tools will be beneficial.
Your primary skills should include a good understanding of scripting and automation, exposure to Linux- and Windows-based environments, experience in DevOps engineering with automation using tools like Ansible and Kustomize, familiarity with CI/CD processes and Agile development, and proficiency in toolchains for containerization, container orchestration, CI/CD, and SecOps. Communication skills, teamwork, attention to detail, problem-solving abilities, learning agility, and effective prioritization are key attributes we are looking for in a candidate.

If you possess excellent communication skills, are self-motivated, proactive, quick to learn new technologies, and can effectively manage multiple projects within deadlines, you are the right fit for this role. Join our team and contribute to our mission of operational excellence, customer-centricity, and continuous learning in a collaborative environment. (ref:hirist.tech)
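The SecOps automation this posting asks for often takes the form of small policy checks gating a CI/CD pipeline. A minimal sketch that rejects container image references not pinned to an immutable tag or digest; the policy itself (forbid untagged and `:latest` images) is an illustrative assumption:

```python
def unpinned_images(images):
    """Return image references that use no tag or the mutable ':latest' tag.

    Images pinned by digest (@sha256:...) or an explicit version tag pass.
    """
    bad = []
    for image in images:
        if "@sha256:" in image:
            continue  # pinned by digest: immutable, always acceptable
        name, sep, tag = image.rpartition(":")
        # No ':' at all, an explicit ':latest', or a '/' in the "tag"
        # (meaning the ':' belonged to a registry port, so there is no tag).
        if not sep or tag == "latest" or "/" in tag:
            bad.append(image)
    return bad

images = [
    "registry.example.com/app@sha256:" + "a" * 64,  # digest-pinned: ok
    "nginx:1.27.1",                                  # version tag: ok
    "redis:latest",                                  # mutable tag: flagged
    "internal/worker",                               # no tag: flagged
]
violations = unpinned_images(images)
```

A pipeline stage would run this over the image references in deployment manifests and fail the build on any violation, complementing scanner tools rather than replacing them.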
Posted 1 week ago
5.0 - 10.0 years
15 - 19 Lacs
Gurugram
Work from Office
- Strong skills in Java 8+, web application frameworks such as Spring Boot, and RESTful API development.
- Familiarity with AWS toolsets, including but not limited to SQS, Lambda, DynamoDB, RDS, S3, Kinesis, and CloudFormation.
- Demonstrated experience in designing, building, and documenting customer-facing RESTful APIs.
- Demonstrable ability to read high-level business requirements and drive clarifying questions.
- Demonstrable ability to engage in self-paced continuous learning to upskill, with the collaboration of engineering leaders.
- Demonstrable ability to manage your own time and prioritize it most effectively.
- Strong skills across the full development lifecycle, from analysis to installation into production.
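The SQS-plus-Lambda combination in this stack follows a standard event shape. A sketch of a handler in Python (the posting itself is Java-focused, so treat this as language-neutral pseudocode-made-runnable); the `order_id` field is invented, while the `Records` envelope and the `batchItemFailures` partial-batch-failure convention are how AWS delivers and expects responses for SQS-triggered Lambdas:

```python
import json

def handler(event, context=None):
    """Process an SQS-triggered Lambda event, reporting per-message failures.

    Messages that fail to parse are returned in batchItemFailures so SQS
    redelivers only those, not the whole batch.
    """
    failures = []
    processed = []
    for record in event.get("Records", []):
        try:
            body = json.loads(record["body"])
            processed.append(body["order_id"])  # 'order_id' is an illustrative field
        except (KeyError, json.JSONDecodeError):
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures, "processed": processed}

# Minimal fabricated event: one good message, one malformed one.
event = {"Records": [
    {"messageId": "m1", "body": json.dumps({"order_id": "A-100"})},
    {"messageId": "m2", "body": "not json"},
]}
result = handler(event)
```

Partial-batch reporting requires enabling `ReportBatchItemFailures` on the event source mapping; without it, any raised exception retries the entire batch.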
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
As a Data Engineer with 7-10 years of experience, you will be responsible for architecting, creating, and maintaining data pipelines and ETL processes in AWS. Your role will involve supporting and optimizing the migration of the current desktop data tool set and Excel analysis pipeline to a transformative, highly scalable cloud-based architecture. You will work in an agile environment within a collaborative cross-functional product team using Scrum and Kanban methodologies.

Collaboration is key in this role, as you will work closely with data science teams and business analysts to refine data requirements for various initiatives and data consumption needs. Additionally, you will be required to educate and train colleagues such as data scientists, analysts, and stakeholders in data pipelining and preparation techniques to facilitate easier integration and consumption of data for their use cases.

Your expertise in programming languages like Python, Spark, and SQL will be essential, along with prior experience with AWS services such as AWS Lambda, Glue, Step Functions, CloudFormation, and the CDK. Knowledge of building bespoke ETL solutions, data modeling, and T-SQL for managing business data and reporting is also crucial for this role. You should be capable of conducting technical deep-dives into code and architecture and have the ability to design, build, and manage data pipelines encompassing data transformation, data models, schemas, metadata, and workload management.

Furthermore, your role will involve working with data science teams to refine and optimize data science and machine learning models and algorithms. Effective communication skills are essential to collaborate effectively across departments and ensure compliance and governance during data use. You will be expected to work within and promote a DevOps culture and continuous delivery process to enhance efficiency and productivity.

This position offers the opportunity to be part of a dynamic team that aims to drive positive change through technology and innovation. Please note that this role is based in Mumbai, with the flexibility to work remotely from anywhere in India.
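Migrating Excel-based analysis into cloud ETL pipelines like those described usually starts with pure transform functions that can be unit-tested before being wired into Lambda or Glue. A minimal sketch; the field names and cleaning rules are invented for illustration:

```python
from datetime import date

def transform(rows):
    """Normalize raw rows: parse dates, coerce amounts, drop incomplete records."""
    clean = []
    for row in rows:
        try:
            clean.append({
                "day": date.fromisoformat(row["date"]),
                "amount": round(float(row["amount"]), 2),
                "region": row["region"].strip().upper(),
            })
        except (KeyError, ValueError):
            # A real pipeline would route bad rows to a quarantine table and log them.
            continue
    return clean

# Fabricated raw extract, as might come from a CSV or spreadsheet export.
raw = [
    {"date": "2024-05-01", "amount": "12.5", "region": " emea "},
    {"date": "bad-date", "amount": "1", "region": "amer"},
]
rows = transform(raw)
```

Keeping the transform a plain function of input rows means the same logic can run under pytest locally and inside a Lambda or Glue job unchanged.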
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a qualified candidate for this position, you should have experience in at least one high-level programming language such as Python, Ruby, or GoLang, and possess a strong understanding of Object-Oriented Programming concepts. You must be proficient in designing, deploying, and managing distributed systems as well as service-oriented architectures. Your responsibilities will include designing and implementing Continuous Integration, Continuous Deployment, and Continuous Testing pipelines using tools like Jenkins, Bamboo, Azure DevOps, AWS CodePipeline, and various other DevOps tools such as Jenkins, Sonar, Maven, Git, Nexus, and UCD. You should also have experience in deploying, managing, and monitoring applications and services on both Cloud and on-premises infrastructure like AWS, Azure, OpenStack, Cloud Foundry, OpenShift, etc. Additionally, you must be proficient in using Infrastructure as Code tools like Terraform, CloudFormation, Azure ARM, etc. Your role will involve developing, managing, and monitoring tools, as well as log analysis tools to handle operations efficiently. Knowledge of tools like AppDynamics, Datadog, Splunk, Kibana, Prometheus, Grafana, Elasticsearch, etc., will be beneficial. You should demonstrate the ability to maintain enterprise-scale production software and possess knowledge of diverse system landscapes such as Linux and Windows. Expertise in analyzing and troubleshooting large-scale distributed systems and Microservices is essential. Experience with Unix/Linux operating systems internals and administration, including file systems, inodes, system calls, networking, TCP/IP routing, and network topologies, is crucial. Preferred skills for this role include expertise in Continuous Integration within the Mainframe environment and Continuous Testing practices.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organizations seeking independent talent. Our client, a global leader in energy management and automation, is currently seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for various data consumers, including BI, analytics, and data science applications. As a Data Engineer, you will work with current technologies such as Apache Spark, Lambda and Step Functions, Glue Data Catalog, and Redshift on the AWS environment.

Key Responsibilities:
- Design and develop new data ingestion patterns into IntelDS Raw and/or Unified data layers based on the requirements and needs for connecting new data sources or building new data objects. Automate data pipelines to streamline the process.
- Implement DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. Design and implement end-to-end data integration tests and CI/CD pipelines.
- Analyze existing data models, and identify performance optimizations for data ingestion and consumption to accelerate data availability within the platform and for consumer applications.
- Support client applications in connecting and consuming data from the platform, ensuring compliance with guidelines and best practices.
- Monitor the platform, debug detected issues and bugs, and provide necessary support.

Skills required:
- Minimum of 3 years of prior experience as a Data Engineer with expertise in Big Data and data lakes in a cloud environment.
- Bachelor's or Master's degree in computer science, applied mathematics, or equivalent.
- Proficiency in data pipelines, ETL, and BI, regardless of the technology.
- Hands-on experience with at least 3 of the following AWS services: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
- Familiarity with Big Data technologies and distributed systems such as Spark, Presto, or Hive.
- Proficiency in Python for scripting and object-oriented programming. - Fluency in SQL for data warehousing, with experience in RedShift being a plus. - Strong understanding of data warehousing and data modeling concepts. - Familiarity with GIT, Linux, CI/CD pipelines is advantageous. - Strong systems/process orientation with analytical thinking, organizational skills, and problem-solving abilities. - Ability to self-manage, prioritize tasks in a demanding environment. - Consultancy orientation and experience with the ability to form collaborative working relationships across diverse teams and cultures. - Willingness and ability to train and teach others. - Proficiency in facilitating meetings and following up with action items.,
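To illustrate the ingestion-pattern work this role describes, here is a minimal, hypothetical sketch of normalizing one raw record into a unified-layer schema. The field names and schema mapping are invented for illustration; in the actual platform this kind of logic would run as a Spark transformation on AWS rather than record-by-record Python.

```python
from datetime import datetime, timezone

# Hypothetical mapping from raw source field names to unified-layer names.
FIELD_MAP = {"dev_id": "device_id", "ts": "event_time", "val": "value"}

def to_unified(raw_record: dict) -> dict:
    """Normalize one raw record into the unified-layer schema.

    Toy single-record version of an ingestion step: rename fields,
    coerce types, and store timestamps as timezone-aware ISO-8601.
    """
    unified = {FIELD_MAP.get(k, k): v for k, v in raw_record.items()}
    # Raw timestamps arrive as Unix epoch seconds (an assumption here).
    unified["event_time"] = datetime.fromtimestamp(
        unified["event_time"], tz=timezone.utc
    ).isoformat()
    unified["value"] = float(unified["value"])
    return unified
```

At scale the same rename/coerce logic would be expressed as Spark column expressions so it can run distributed over the Raw layer.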
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
punjab
On-site
As a Java/Ping/IAM Support Analyst based in Sydney, you will be expected to possess the following skills and experience:
- Demonstrated understanding of Core Java and J2EE server technologies.
- Experience with Ping/IAM, SAML, and OAuth 2.0 protocols.
- Proficiency in tools such as GitHub, Bitbucket, Jenkins, CodeCommit, CloudFormation, Datadog, and AWS environments.
- Effective communication skills to interact with team members and stakeholders.

Desirable qualifications for this role include:
- Prior experience working in an Agile environment.
- Knowledge of the Retail Banking domain.
- Familiarity with service management and support tools like ServiceNow and HP Fortify, providing an additional advantage.

If you meet the mandatory requirements and possess some or all of the desirable qualifications, we encourage you to apply for this position and contribute to our team's success.
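As a rough illustration of the OAuth 2.0 side of this role, the sketch below decodes a JWT access token's claims using only the standard library, which is a common first step when triaging authentication tickets. It deliberately skips signature verification; real support work would validate the token against the identity provider's JWKS (e.g. via Ping) before trusting any claim.

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT without verifying its signature.

    For inspection/triage only: production code must verify the signature
    against the identity provider's published keys before trusting claims.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are base64url-encoded without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```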
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
As a candidate for the DevOps position, you should have a strong background in Linux/Unix administration and experience with automation/configuration management tools like Jenkins, Puppet, Chef, or their equivalents. It is essential to be proficient in a variety of open-source technologies and cloud services, with specific experience in AWS, VMware, Azure, and GCP. Your role will involve working with SQL and MySQL databases, and familiarity with NoSQL databases like Redis is a plus. You should have a working understanding of coding and scripting languages such as PHP, Python, Perl, or Ruby. Knowledge of best practices in IT operations for maintaining an always-up, always-available service is crucial.

In this position, you will be responsible for implementing integrations requested by customers, deploying updates and fixes, providing Level 2 technical support, and building tools that enhance the customer experience by reducing errors. Root cause analysis for production errors, resolving technical issues, and developing scripts for automation are also key responsibilities.

Experience with CI/CD tools like Ansible, Jenkins, Git, and Terraform, as well as CloudFormation, is required. You should have a good understanding of IaaS, SaaS, and PaaS, along with experience in NFV technologies. Knowledge of OWASP, threat modelling methodologies (STRIDE, PASTA, NIST, SAST), and security analysis tools for infrastructure improvements is important. Hands-on experience with Docker and Kubernetes is necessary, including creating secure container images, implementing container network security, and automating security testing. Familiarity with container security tools for scanning containers, registries, and runtime monitoring is expected.

In terms of skills and education, a Bachelor's degree or MS in Engineering is preferred, along with experience in managing Linux-based infrastructure and proficiency in at least one scripting language. Hands-on experience with databases like PostgreSQL, MySQL, Mongo, and Elasticsearch, as well as knowledge of Java/JVM-based languages, is beneficial. Critical thinking, problem-solving skills, a sense of ownership, and teamwork are essential qualities for this role, as are good time-management, interpersonal, and communication skills.

Mandatory qualifications include AWS certification and hands-on experience with VMware, Azure, and GCP cloud services. The job type is full-time, with benefits such as health insurance and performance bonuses; the work schedule is a day shift. If you are currently serving a notice period, please specify your last working day. At least 2 years of experience in DevOps, Kubernetes, Helm charts, and Terraform is required for this position, and the work location is in person.
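To give a feel for the "scripts for automation" and root-cause-analysis duties above, here is a minimal Python sketch that tallies ERROR log entries per component so the noisiest component surfaces first. The log layout and regex are assumptions for illustration; a real service's format would differ.

```python
import re
from collections import Counter

# Assumed layout: "<timestamp> ERROR <component> - <message>".
ERROR_PATTERN = re.compile(r"ERROR\s+(?P<component>\S+)\s+-\s+(?P<message>.+)")

def summarize_errors(log_lines):
    """Count ERROR entries per component to surface the likeliest root cause.

    Returns (component, count) pairs, most frequently failing first.
    """
    counts = Counter()
    for line in log_lines:
        match = ERROR_PATTERN.search(line)
        if match:
            counts[match.group("component")] += 1
    return counts.most_common()
```

In practice a script like this would be one small piece of a triage toolchain feeding a ticketing or alerting system, not the analysis itself.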
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
The Cloud/DevOps Architect position, based in Pune, is a full-time role suitable for a senior professional with 5+ years of experience in DevOps and cloud architecture. As a Cloud/DevOps Architect, you will lead the design and implementation of scalable, secure, and fault-tolerant cloud infrastructure solutions. Your role involves devising strategies for incident response, fault tolerance, and zero-trust security measures.

Your key responsibilities will include leading and implementing technical DevOps solutions to enhance development and deployment workflows, designing comprehensive incident response plans for swift issue resolution, creating robust fault tolerance architectures to ensure system availability and data integrity, managing Infrastructure as Code (IaC) using tools like Terraform, CloudFormation, or similar, designing efficient load balancing strategies, architecting multi-region CI/CD pipelines for high availability, and enforcing zero-trust security principles.

To excel in this role, you should possess proven experience in cloud architecture or DevOps engineering; expertise in AWS, Azure, or Google Cloud Platform; proficiency in IaC tools like Terraform, Ansible, or CloudFormation; a strong grasp of CI/CD tools such as Jenkins or GitLab CI; knowledge of SRE practices, monitoring, and alerting systems; an understanding of networking, load balancing, and distributed systems; and experience implementing zero-trust architectures and identity and access management (IAM).

Preferred qualifications include relevant certifications such as AWS Certified Solutions Architect or Azure Solutions Architect Expert, experience with Kubernetes and container orchestration, and familiarity with security compliance standards such as SOC 2 or ISO 27001.
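The fault-tolerance responsibilities above can be illustrated with a minimal retry-with-exponential-backoff helper. This is a sketch of the general pattern in Python, not any particular platform's implementation; production systems would add jitter, retry budgets, and circuit breaking on top of it.

```python
import time

def with_retries(func, attempts=3, base_delay=0.01):
    """Retry a flaky call with exponential backoff.

    Delays double on each failure (base_delay, 2x, 4x, ...); the final
    failure is re-raised so callers can escalate to incident response.
    """
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

The same idea appears at every layer of a fault-tolerant design: SDK-level retries, load-balancer health checks, and multi-region failover are all variations on bounded, backed-off retry.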
Posted 2 weeks ago
3.0 - 5.0 years
8 - 10 Lacs
hyderabad
Work from Office
Strong skills in Java 8+, web application frameworks such as Spring Boot, and RESTful API development. Familiarity with AWS toolsets, including but not limited to SQS, Lambda, DynamoDB, RDS, S3, Kinesis, and CloudFormation. Demonstrated experience in designing, building, and documenting customer-facing RESTful APIs. Demonstrable ability to read high-level business requirements and drive clarifying questions. Demonstrable ability to engage in self-paced continuous learning to upskill, with the collaboration of engineering leaders. Demonstrable ability to manage your own time and prioritize how you spend it most effectively. Strong skills across the full lifecycle of development, from analysis to installation into production.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 15 Lacs
hyderabad
Work from Office
Strong skills in Java 8+, web application frameworks such as Spring Boot, and RESTful API development. Familiarity with AWS toolsets, including but not limited to SQS, Lambda, DynamoDB, RDS, S3, Kinesis, and CloudFormation. Demonstrated experience in designing, building, and documenting customer-facing RESTful APIs. Demonstrable ability to read high-level business requirements and drive clarifying questions. Demonstrable ability to engage in self-paced continuous learning to upskill, with the collaboration of engineering leaders. Demonstrable ability to manage your own time and prioritize how you spend it most effectively. Strong skills across the full lifecycle of development, from analysis to installation into production.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a Customer Onboarding Manager to join our Technology Organization. As a member of the Delivery Management team, you will collaborate with our Enterprise customers to strategize and set up our products according to their specific needs. Teaming up with a Delivery Architect, you will play a vital role in ensuring a swift and efficient onboarding process for customers. To excel in this position, you should possess project management expertise, a forward-thinking approach towards technology, and a problem-solving mindset.

Your responsibilities will include guiding our Enterprise customers through the onboarding procedures for ASAPP's products. You will be in charge of coordinating and overseeing all tasks, which involves risk assessment and mitigation, defining milestones, testing schedules, and final acceptance. It will also be your duty to document and enhance product requirements for effective communication both internally and externally. Collaborating with our Delivery Architecture and Engineering teams, you will configure ASAPP's products. Furthermore, you will actively participate in multiple customer project teams, working closely with Delivery Architects and Engineers to complete the designated tasks. You are expected to become an expert on ASAPP's products, providing insights internally and externally. Additionally, you will collaborate with our Go-to-Market, Product, Engineering, and Research teams to achieve our customers' objectives.

To qualify for this role, you should have a minimum of 5 years of relevant experience, with at least 3 years in project management, implementation, or customer success. You must be familiar with cloud-based software implementation processes and possess the ability to navigate complex interpersonal relationships while engaging with stakeholders at different levels within our customers' organizations. A proactive attitude, coupled with the ability to identify and resolve issues, is essential. Being adaptable and open to feedback is crucial, along with strong presentation skills to communicate business-oriented solutions and technical concepts effectively. Experience in creating, documenting, and enhancing technical processes is also required.

Desirable qualifications include advanced experience with cloud technologies such as AWS, Docker/Kubernetes, CloudFormation, Terraform, EC2, IAM, and S3; experience in the contact center and customer experience industry; direct collaboration with technical partners; proficiency in major programming languages (Golang, Python, C++, Java, etc.); experience working in multi-developer/engineer environments using tools like Git, Jira, and CI/CD pipelines; and familiarity with Machine Learning, Natural Language Processing, and LLMs.

In addition to a challenging role, we offer competitive compensation, stock options, life insurance, complimentary onsite meals, a connectivity stipend for mobile phone and internet services, wellness benefits, Mac equipment, learning and development support, and parental leave, including 6 weeks of paternity leave.
Posted 2 weeks ago
4.0 - 9.0 years
8 - 16 Lacs
pune
Remote
Enhance/modify applications, configure existing systems, and provide user support.
DotNET Full Stack, or [Angular 18+ developer + DotNET backend]
SQL Server
Angular version 18+ (nice to have)
Angular version 15+ (mandatory)
Posted 2 weeks ago
8.0 - 13.0 years
18 - 25 Lacs
hyderabad
Work from Office
Seeking a Java Developer with strong expertise in AWS cloud services. The ideal candidate has 8+ years of experience developing scalable, high-performance web applications using Java (up to Java 17), Spring Boot, Angular (2-12), and AWS, and will be responsible for designing, developing, and deploying robust cloud-native applications, leveraging AWS services such as Lambda, EC2, S3, RDS, DynamoDB, and API Gateway.

Key Responsibilities:
- Design and develop full-stack Java applications using Spring Boot, Angular, and RESTful APIs.
- Build and maintain AWS-based cloud solutions, leveraging EC2, S3, Lambda, DynamoDB, API Gateway, and CloudFormation.
- Develop and optimize microservices architectures, ensuring high availability and scalability.
- Implement CI/CD pipelines using Jenkins, AWS CodePipeline, and Terraform for seamless deployments.
- Work with Docker and Kubernetes (EKS) for containerized applications.
- Optimize system performance using monitoring tools like AWS CloudWatch, X-Ray, and the ELK Stack.
- Utilize Apache Kafka for event-driven architectures and Redis for caching.
- Ensure robust testing and quality assurance with JUnit, Mockito, Jasmine, and Postman.
- Collaborate in an Agile/Scrum environment to drive innovation and efficiency.
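One pattern behind the Kafka event-driven responsibilities above is idempotent consumption under at-least-once delivery. The sketch below shows the idea in Python for brevity (the role itself is Java); in practice the seen-ID set would live in a durable store such as Redis rather than in memory.

```python
class IdempotentConsumer:
    """De-duplicate events by ID so at-least-once delivery is safe to replay.

    Illustrative in-memory version: a production consumer would persist
    seen IDs (e.g. in Redis) and expire them after a retention window.
    """

    def __init__(self, handler):
        self.handler = handler
        self.seen_ids = set()

    def consume(self, event: dict) -> bool:
        """Return True if the event was processed, False if it was a duplicate."""
        event_id = event["id"]
        if event_id in self.seen_ids:
            return False
        self.handler(event)
        self.seen_ids.add(event_id)
        return True
```

Marking the event as seen only after the handler succeeds means a crash mid-handler causes a retry rather than a lost event, which is the trade-off at-least-once systems make.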
Posted 2 weeks ago