
160 Bamboo Jobs - Page 5

JobPe aggregates results for easy application access, but you apply directly on the original job portal.

3 - 6 years

3 - 7 Lacs

Pune

Work from Office


Job Title: Consultant - Automation Developer
Location: Pune (Hybrid - 3 days a week in office, 2 days work from home; candidate must report to the Pune office only; relocation will be considered)
Shift Timings: 12:30 PM - 9:30 PM IST
Budget: Level 3 (7 to 10 yrs) - 28 LPA; Level 4 (10+ to 12+ yrs) - 31 LPA; Level 5 (13 to 15+ yrs) - 36 LPA
Interview: 2 rounds (hiring managers are available between 3 PM and 5 PM IST)
Positions: 4
Notice period considered: up to 30 days
Career gap considered: up to 6 months

1. Project Overview: Designing, developing, implementing, and maintaining core infrastructure automation components using software engineering principles and methodology. Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies. The ideal candidate has excellent software development skills and DevOps or automation experience within a medium to large enterprise environment.

2. Role: Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies. This is a fast-paced team, and the successful candidate must be able to act with a sense of urgency and adapt to changing priorities. Qualified candidates must possess a keen sense of leadership, strong analytical skills, excellent communication skills, good technical abilities, and a can-do attitude.

3. Experience Level: Level 3

4. Qualifications: BSc IT/BE in any stream.

5. Must-Have Skills (Overall: 7+ years; Relevant: 5+ years):
5+ years of automation development experience using Ansible, Terraform, Chef, Python, shell scripts, etc. as the automation tool.
Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies.
ITSM process experience utilizing ticketing tools like Remedy or ServiceNow.
At least 3 years supporting a BFSI organization/client.

6. Nice to Haves:
Experience with developing and automating CI/CD pipelines using GitHub Actions and Terraform.
Experience in identifying drift patterns and implementing remediation workflows, and experience with CMDB.
Knowledge of designing drift/configuration detection and management solutions using Ansible, Chef, Terraform, etc.
Experience installing and administering applications in Unix, Linux, and Windows environments, including debugging and command-line use.
Familiarity with IaaS (Infrastructure as a Service) or PaaS (Platform as a Service).
Experience implementing automated solutions for disaster recovery of a web application stack.
Familiarity with development tools, e.g., JIRA, Bitbucket, Bamboo, Maven.
Familiarity with service-oriented architecture (SOA), web services, SOAP, and JSON.
Familiarity with REST API integration.
Familiarity with containerization concepts; Docker and Kubernetes a plus.

7. Tasks & Responsibilities:
Designing, developing, implementing, and maintaining core infrastructure automation components using software engineering principles and methodology.
Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies.
Writing components that integrate with the different infrastructure components that constitute the client's private and public cloud implementations, using the interfaces exposed by those components (REST, Python, etc.).

The ideal candidate has excellent software development skills and DevOps or automation experience within a medium to large enterprise environment. This is a fast-paced team, and the successful candidate must be able to act with a sense of urgency and adapt to changing priorities. Qualified candidates must possess a keen sense of leadership, strong analytical skills, excellent communication skills, good technical abilities, and a can-do attitude. Should be able to work effectively in an onshore-offshore model using Agile software development methodologies.

Posted 2 months ago

Apply

12 - 16 years

35 - 37 Lacs

Ahmedabad, Noida, Mumbai (All Areas)

Work from Office


Dear Candidate,

We are seeking a skilled Full Stack Developer to join our development team. The ideal candidate will be responsible for designing, developing, and maintaining both the front-end and back-end of web applications. You will work closely with UX/UI designers, back-end engineers, and other team members to deliver dynamic and scalable solutions across the entire development stack.

Role & Responsibilities:
Full Stack Development: Design, develop, and maintain both the front-end and back-end components of web applications, ensuring seamless integration.
Front-End Development: Build responsive and user-friendly interfaces using modern JavaScript frameworks like React, Angular, or Vue.js.
Back-End Development: Develop and maintain server-side logic, APIs, and databases using technologies like Node.js, Java, Python, or Ruby.
Database Management: Work with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., MongoDB) databases to manage and store data efficiently.
RESTful APIs: Design and develop RESTful APIs to enable communication between the front-end and back-end systems.
Performance Optimization: Ensure the application is optimized for maximum speed and scalability while adhering to security best practices.
Code Quality: Write clean, maintainable, and well-documented code, ensuring adherence to coding standards and best practices.

Required Skills & Qualifications:
Front-End Development: Strong proficiency in JavaScript and experience with front-end frameworks such as React, Angular, or Vue.js.
Back-End Development: Experience with server-side technologies like Node.js, Java, Python, or Ruby.
Database Knowledge: Familiarity with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB) databases.
API Development: Experience designing and developing RESTful APIs to facilitate communication between front-end and back-end components.
Version Control: Proficiency with Git and GitHub for code versioning and collaboration.
Responsive Design: Knowledge of responsive design principles and experience with CSS frameworks like Bootstrap or Tailwind CSS.
Cloud Platforms: Familiarity with cloud platforms like AWS, Azure, or GCP for application hosting and deployment.
Project Management: Ability to manage multiple tasks and meet deadlines in a fast-paced development environment.

Soft Skills:
Strong problem-solving and analytical skills.
Excellent communication skills to work with cross-functional teams.
Ability to work independently and as part of a team.
Detail-oriented with a focus on delivering high-quality solutions.

Note: If you are interested, please share your updated resume and suggest the best number and time to connect with you. If your resume is shortlisted, one of the HR team members will contact you as soon as possible.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies

Posted 2 months ago

Apply

3 - 8 years

14 - 19 Lacs

Bengaluru

Work from Office


Job Title: Site Reliability Engineer (SRE)

Responsibilities:
Experience in one or more high-level programming languages such as Python, Ruby, or GoLang, and familiarity with object-oriented programming.
Proficient in designing, deploying, and managing distributed systems and service-oriented architectures.
Design and implement CI/CD/CT pipelines on one or more tool stacks, such as Jenkins, Bamboo, Azure DevOps, and AWS CodePipeline, with hands-on experience in common DevOps tools (Jenkins, Sonar, Maven, Git, Nexus, UCD, etc.).
Experience deploying, managing, and monitoring applications and services on one or more cloud and on-premises infrastructures, such as AWS, Azure, OpenStack, Cloud Foundry, OpenShift, etc.
Proficiency in one or more Infrastructure as Code tools (e.g., Terraform, CloudFormation, Azure ARM).
Developing and managing monitoring and log analysis tools to manage operations, with exposure to tools such as AppDynamics, Datadog, Splunk, Kibana, Prometheus, Grafana, Elasticsearch, etc.
Proven ability to maintain enterprise-scale production software with knowledge of heterogeneous system landscapes (e.g., Linux, Windows).
Expertise in analyzing and troubleshooting large-scale distributed systems and microservices, with experience in Unix/Linux operating system internals and administration (e.g., file systems, inodes, system calls) and networking (e.g., TCP/IP, routing, network topologies).

Preferred Skills: Technology -> DevOps -> Continuous Integration - Mainframe; Technology -> DevOps -> Continuous Testing
Educational Requirements: Bachelor of Engineering
Service Line: Quality
* Location of posting is subject to business requirements.

Posted 2 months ago

Apply

4 - 6 years

16 - 17 Lacs

Greater Noida, Bengaluru

Work from Office


Be a part of the Niagara Cloud Suite's world-class team of software engineers as we advance Tridium's position as a market leader in open systems and software. Participate in the design and implementation of the Niagara Cloud Suite. Execute full-lifecycle software development. Write well-designed, testable, high-quality, efficient code. Operate in an Agile development environment while collaborating with key stakeholders. Collaborate with a globally distributed engineering team.

You must have:
Bachelor's degree in Computer Science, Computer Engineering, or a software-related discipline
4-6 years of experience as a professional software engineer
4 years of Java and cloud development experience

We value the following qualifications:
Experience with cloud development and Agile software development methodologies
Core Java (JDK 21, OOP, multithreading, collections)
Cloud (MS Azure services, RDBMS, TSDB, Spring Boot, JPA, Criteria Builder, design patterns, Docker, JWT, Kubernetes)
CI/CD: Bamboo, GitHub
Familiarity with cloud providers like AWS, Azure, or GCP and their Kubernetes offerings
Security: OAuth
Experience with Test-Driven Design
Excellent analytical and problem-solving skills, including the ability to identify, formulate, and solve engineering problems
Experience working in framework development

Posted 2 months ago

Apply

6 - 8 years

25 - 30 Lacs

Greater Noida, Bengaluru

Work from Office


Be a part of Tridium's world-class team of software engineers as we advance Tridium's position as a market leader in open systems and software. Participate in the design and implementation of Tridium's next-generation Niagara software technology. Execute full-lifecycle software development. Write well-designed, testable, high-quality, efficient code. Operate in an Agile development environment while collaborating with key stakeholders. Collaborate with a globally distributed engineering team.

You must have:
Bachelor's degree in Computer Science, Computer Engineering, or a software-related discipline
6-8 years of experience as a professional software engineer
5 years of DevOps and Java development experience

We value the following qualifications:
Bachelor's or master's degree in a related field
Experience with DevOps and Agile software development methodologies
Java, Gradle, Python, Kotlin, Bamboo, GitHub Actions
Good knowledge of PowerShell scripting, plugin development, and C/C++
Core Java and Java fundamentals (Java classes, data structures, algorithms, packages and methods, Java Collections framework, exception handling, logging, JDBC, I/O package, multithreading)
Experience in DevOps development; strong design and architecture fundamentals are a must-have
Experience in RESTful web services and the Spring Boot framework (optional); design patterns and problem-solving skills
Basic JavaScript knowledge using field editors
SQL/MySQL/PostgreSQL databases preferred; knowledge of the Orion database is good to have
Knowledge of Java 8 or higher versions is good to have
Experience with Test-Driven Design
Cloud experience, preferably Azure; AWS is a plus
Experience in a multithreaded technical environment and understanding of asynchronous programming techniques
Excellent analytical and problem-solving skills, including the ability to identify, formulate, and solve engineering problems
Experience working in framework development

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Bengaluru

Work from Office


Design, construct, install, test, and maintain highly scalable data management systems using big data technologies such as Apache Spark (with a focus on Spark SQL) and Hive.
Manage and optimize our data warehousing solutions, with a strong emphasis on SQL performance tuning.
Implement ETL/ELT processes using tools like Talend or custom scripts, ensuring efficient data flow and transformation across our systems.
Utilize AWS services including S3, EC2, and EMR to build and manage scalable, secure, and reliable cloud-based solutions.
Develop and deploy scripts in Linux environments, demonstrating proficiency in shell scripting.
Utilize scheduling tools such as Airflow or Control-M to automate data processes and workflows.
Implement and maintain metadata-driven frameworks, promoting reusability, efficiency, and data governance.
Collaborate closely with DevOps teams using SDLC tools such as Bamboo, JIRA, Bitbucket, and Confluence to ensure seamless integration of data systems into the software development lifecycle.
Communicate effectively with both technical and non-technical stakeholders for handover, incident management reporting, etc.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Demonstrated expertise in big data technologies, specifically Apache Spark (focus on Spark SQL) and Hive.
Extensive experience with AWS services, including S3, EC2, and EMR.
Strong expertise in data warehousing and SQL, with experience in performance optimization.
Experience with ETL/ELT implementation (such as Talend).
Proficiency in Linux, with a strong background in shell scripting.

Preferred technical and professional experience:
Familiarity with scheduling tools like Airflow or Control-M.
Experience with metadata-driven frameworks.
Knowledge of DevOps tools such as Bamboo, JIRA, Bitbucket, and Confluence.
Excellent communication skills and a willing attitude toward learning.

Posted 2 months ago

Apply

5 - 10 years

20 - 27 Lacs

Bengaluru

Work from Office


About Zscaler
Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced DevSecOps Engineer to join our team. Reporting to a Director of Engineering, you'll be responsible for:
Designing/architecting, implementing, and supporting end-to-end CI/CD systems for mission-critical distributed application deployments on Zscaler private and public clouds such as AWS and GCP
Assisting developers with merging, resolving conflicts, and creating and managing pre-commit hooks, and owning administration of DevSecOps tools such as GitLab, GitHub, Bitbucket, Bamboo, Jenkins, Grafana, Prometheus, Artifactory, ArgoCD/Flux, etc.
Securing the code, applications, and infrastructure, with strong working experience in security scanning (SAST/SCA/DAST) tools such as SonarQube, Snyk, BlackDuck, Coverity, CheckMarx, TruffleHog, etc.
Automating infrastructure provisioning and configuration (IaC) using tools like Terraform, Chef, Ansible, Puppet, etc.
Tracking and monitoring build metrics such as code coverage, build times, build queue times, and usage/consumption for build agents, and charting them over time using tools such as Prometheus, Grafana, CloudWatch, Splunk, Loki, etc.

What We're Looking For (Minimum Qualifications)
You would need a Bachelor of Engineering/Technology degree in Computer Science, Information Technology, or a related field, with at least 4 years of hands-on experience in managing AWS, Google Cloud (GCP), and/or private cloud environments
Strong application development/automation experience with one of the OOP languages: C/C++/Java/Python/Go
Experience with SAST, SCA, DAST, and secret scans, and familiarity with scanning tools such as SonarQube, Snyk, Coverity, BlackDuck, CheckMarx, TruffleHog, etc.
Experience with container orchestration technologies such as Docker, Podman, Kubernetes, EKS/GKE, and proficiency in automation using tools such as Terraform, CloudFormation, Ansible, Chef, Puppet, etc.
Experience with Git and GitOps-based pipelines using GitLab, GitHub, Bitbucket, and CI automation tools like Jenkins, GitHub Actions, Bamboo

What Will Make You Stand Out (Preferred Qualifications)
Experience writing and developing YAML-based CI/CD pipelines using GitLab and GitHub, and knowledge of build tools like makefiles/Gradle/npm/Maven
Experience with networking, load balancers, firewalls, and web security
Experience with AI and ML tools in day-to-day DevSecOps activities

#LI-Onsite #LI-AC10

At Zscaler, we believe that diversity drives innovation, productivity, and success. We are looking for individuals from all backgrounds and identities to join our team and contribute to our mission to make doing business seamless and secure. We are guided by these principles as we create a representative and impactful team, and a culture where everyone belongs. For more information on our commitments to Diversity, Equity, Inclusion, and Belonging, visit the Corporate Responsibility page of our website.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: various health plans, time off plans for vacation and sick time, parental leave options, retirement options, education reimbursement, in-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.

Zscaler is proud to be an equal opportunity and affirmative action employer. We celebrate diversity and are committed to creating an inclusive environment for all of our employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristics protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules. For additional information about the federal requirements, click here.

Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 2 months ago

Apply

6 - 10 years

22 - 27 Lacs

Chennai, Bengaluru

Hybrid


We are seeking a skilled Splunk Engineer to join our team. The ideal candidate will have strong expertise in Splunk development technologies and practices, as well as experience in system monitoring, incident management, and mentoring. This role requires a deep understanding of Splunk infrastructure components and a solid background in software engineering and security practices.

Key Responsibilities:
Develop and maintain Splunk services and platforms to ensure availability and health.
Participate in end-to-end system design and delivery.
Manage incidents, problems, and defects, applying fixes and resolving systematic issues.
Mentor and guide other engineers within the team.
Onboard applications in Splunk, involving log ingestion, database queries, and transaction stitching.
Create and manage Splunk dashboards and alerts.
Utilize ITSI and Splunk data ingestion patterns like DBX, JMS-MQ, UF, files, HEC, etc.
Administer Splunk infrastructure components such as indexers, universal forwarders, heavy forwarders, search head clusters, cluster master, deployment servers, etc.
Provide support for Splunk platforms, including problem and incident management.
Use MongoDB and Elasticsearch for data management.
Utilize programming skills in CSS, JavaScript, Java, Python scripting, and regex.
Implement CI/CD tools such as Git, Bitbucket, Bamboo, Artifactory, and Ansible.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
6-10 years of experience in Splunk engineering and related technologies.
Proficiency in Splunk infrastructure, data ingestion, and dashboard creation.
Strong problem-solving and analytical skills.
Excellent communication and mentoring abilities.
Exposure to New Relic is an added advantage.

Posted 2 months ago

Apply

7 - 12 years

9 - 14 Lacs

Jaipur

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Atlassian JIRA
Good-to-have skills: NA
Minimum experience required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems for your immediate team and across multiple teams. With your expertise in Atlassian JIRA, you will play a crucial role in driving the success of our projects.

Roles & Responsibilities:
Expected to be an SME
Collaborate with and manage the team to perform
Responsible for team decisions
Engage with multiple teams and contribute to key decisions
Provide solutions to problems for the immediate team and across multiple teams
Lead the effort to design, build, and configure applications
Act as the primary point of contact
Manage the team and ensure successful project delivery

Professional & Technical Skills:
Must-have skills: proficiency in Atlassian JIRA
Strong understanding of software development principles and methodologies
Experience in designing and implementing scalable and reliable applications
Knowledge of Agile methodologies and experience in Agile development practices
Experience in troubleshooting and resolving application issues

Additional Information:
The candidate should have a minimum of 7.5 years of experience in Atlassian JIRA
This position is based at our Bengaluru office
15 years of full-time education is required

Qualifications: 15 years of full-time education

Posted 2 months ago

Apply

6 - 10 years

1000 Lacs

Bengaluru

Work from Office


Overview
At Zebra, we are a community of innovators who come together to create new ways of working to make everyday life better. United by curiosity and care, we develop dynamic solutions that anticipate our customers' and partners' needs and solve their challenges. Being a part of Zebra Nation means being seen, heard, valued, and respected. Drawing from our diverse perspectives, we collaborate to deliver on our purpose. Here you are a part of a team pushing boundaries to redefine the work of tomorrow for organizations, their employees, and those they serve. You have opportunities to learn and lead at a forward-thinking company, defining your path to a fulfilling career while channeling your skills toward causes that you care about, locally and globally. We've only begun reimagining the future for our people, our customers, and the world. Let's create tomorrow together.

Analyzes, develops, designs, and maintains software for the organization's products and systems. Performs system integration of software and hardware to maintain throughput and program consistency. Develops, validates, and tests structures and user documentation. Work is evaluated upon completion to ensure objectives have been met. Determines and develops approaches to solutions.

Responsibilities
Experience with Scala, Kafka Streams, and Akka Actors
Experience with GCP data storage (GCP buckets, MongoDB, MySQL, etc.)
Experience with Google Cloud Platform (GCP) design and implementation
Experience with security in GCP
Experience with networking in GCP
Lifecycle management of GKE and overall compute in GCP, including GCE
Hands-on experience with microservices and distributed application architecture utilizing containers, Kubernetes, and/or serverless technology
Experience with seamless/automated build scripts used for release management across all environments
Experience with the full software development lifecycle and delivery using Agile practices
In-depth understanding of IP networking, VPNs, DNS, load balancing, and firewalls
Experience with multi-cloud architecture and deployment
Experience developing cloud-native CI/CD workflows and tools, such as Jenkins, Bamboo, Cloud Build (Google), etc.
Establishes requirements for moderately complex software design projects and prioritizes features to ensure the most important get implemented
Participates in code reviews, identifies problematic sections early in the process, and recodes them
Completes all phases of moderately complex software design projects and carries out all in-process and final inspection activities
Develops and tests documentation for software projects
Considers the latest technologies and new approaches in the design and implementation of new designs
Reviews changes or upgrades to existing software and/or firmware designs
Develops new technology to solve unique problems
Provides recommendations and solutions to problems using experience in multiple technical areas
Applies existing technology in new ways to improve performance and productivity
May develop new tools to aid in the analysis and solving of problems
Exercises judgment in selecting methods and techniques for obtaining solutions
Receives little instruction on day-to-day work and general instructions on new assignments
May influence the activities of junior-level personnel (exempt professional and non-exempt)
Networks with senior internal and external personnel in own area of expertise; frequent inter-organizational and outside customer contacts

Qualifications
Minimum Education: Bachelor's degree or technical diploma in Computer Science, Electronic Engineering, Computer Engineering, or a related field
6+ years of experience working on an operations-style team (NOC, SOC, MOC, etc.) and troubleshooting networking, service desk, operations center, and/or supporting cloud-based infrastructure
Proficient with Scala
Experience with Kafka

Preferred Experience:
Experience with Windows, Linux, web services, networking, databases, and cloud platforms (AWS, Azure, and GCP)
Understanding of the following monitoring concepts: infrastructure, systems, and application health; system availability; latency; performance; and end-to-end monitoring
Entry-level cloud, network, or security certificate from Cisco, Microsoft, AWS, CompTIA, or other well-known vendors
Knowledge and experience of SIEM, ELK, PLG, and container orchestration platforms like Kubernetes are preferred
Cybersecurity-related certifications such as Security+, SSCP, CCSP, and CEH
Knowledge of markup, query, and scripting languages, including Python, HTML, PromQL, and SQL, and familiarity with REST API calls and PowerShell
Experience (1+ years) with ITIL processes including Incident, Problem, Change, Knowledge, and Event Management

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Karnataka

Work from Office


Description
Skill set: REST/API concepts and principles, gateways, API security, Node.js, microservices, DevOps, IaC (Terraform), CI/CD (Git, Jenkins/Bamboo/Codefresh), Docker, Splunk/Dynatrace, cloud (AWS/GCP, EKS/GKE), communication, problem solving

Additional Details
Global Grade: C
Level: To be defined
Named Job Posting? (if Yes, needs to be approved by SCSC): No
Remote work possibility: No
Global Role Family: 60236 (P) Software Engineering
Local Role Name: 6504 Developer / Software Engineer
Local Skills: 37105 Node JS
Languages Required: English
Role Rarity: To be defined

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Bengaluru

Work from Office


Description
Role Title: Playwright Developer
Primary Skills: Automation Testing
Secondary Skills: API Testing
Client Interview (Y/N): Y

Detailed Skill Description (with relevant years of experience):
This person should have overall 5+ years of automation experience, with 2+ years of Playwright automation. Responsible for understanding the existing automation framework, test planning, and writing automated scripts using Playwright for various applications. This person should have the ability to track multiple test efforts simultaneously and to synthesize the results in a fast-paced environment. This person should be proficient in using test management tools like JIRA/QMetry/MF ALM/Octane. Knowledge of the telecom mobile domain is preferred.

Additional Details
Global Grade: B
Level: To be defined
Named Job Posting? (if Yes, needs to be approved by SCSC): No
Remote work possibility: No
Global Role Family: To be defined
Local Role Name: To be defined
Local Skills: Microsoft Playwright; test automation; Atlassian Jira
Languages Required: English
Role Rarity: To be defined

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Bengaluru

Work from Office


Description
Plugin Development: Atlassian products, third-party plugins, Atlassian SDK/Forge
Scripting: Groovy (ScriptRunner plugin for JIRA), JavaScript (customize default JIRA functionality)
APIs: Atlassian API, REST API

Additional Details
Global Grade: C
Level: LEVEL 3 - SENIOR (6-9 years experience)
Named Job Posting? (if Yes, needs to be approved by SCSC): No
Remote work possibility: No
Global Role Family: To be defined
Local Role Name: To be defined
Local Skills: Atlassian; SDK; Apache Groovy
Languages Required: English
Role Rarity: To be defined

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Pune

Work from Office


Job Title: Consultant - Automation Developer
Location: Pune (Hybrid - 3 days a week in office, 2 days work from home; candidate must report to the Pune office only; relocation will be considered)
Shift Timings: 12:30 PM - 9:30 PM IST
Budget: Level 3 (7 to 10 yrs) - 28 LPA; Level 4 (10+ to 12+ yrs) - 31 LPA; Level 5 (13 to 15+ yrs) - 36 LPA
Interview: 2 rounds (hiring managers are available between 3 PM and 5 PM IST)
Positions: 4
Notice period considered: up to 30 days
Career gap considered: up to 6 months

1. Project Overview: Designing, developing, implementing, and maintaining core infrastructure automation components using software engineering principles and methodology. Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies. The ideal candidate has excellent software development skills and DevOps or automation experience within a medium to large enterprise environment.

2. Role: Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies. This is a fast-paced team, and the successful candidate must be able to act with a sense of urgency and adapt to changing priorities. Qualified candidates must possess a keen sense of leadership, strong analytical skills, excellent communication skills, good technical abilities, and a can-do attitude.

3. Experience Level: Level 3

4. Qualifications: BSc IT/BE in any stream.

5. Must-Have Skills (Overall: 7+ years; Relevant: 5+ years):
5+ years of automation development experience using Ansible, Terraform, Chef, Python, shell scripts, etc. as the automation tool.
Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies.
ITSM process experience utilizing ticketing tools like Remedy or ServiceNow.
At least 3 years supporting a BFSI organization/client.

6. Nice to Haves:
Experience with developing and automating CI/CD pipelines using GitHub Actions and Terraform.
Experience in identifying drift patterns and implementing remediation workflows, and experience with CMDB.
Knowledge of designing drift/configuration detection and management solutions using Ansible, Chef, Terraform, etc.
Experience installing and administering applications in Unix, Linux, and Windows environments, including debugging and command-line use.
Familiarity with IaaS (Infrastructure as a Service) or PaaS (Platform as a Service).
Experience implementing automated solutions for disaster recovery of a web application stack.
Familiarity with development tools, e.g., JIRA, Bitbucket, Bamboo, Maven.
Familiarity with service-oriented architecture (SOA), web services, SOAP, and JSON.
Familiarity with REST API integration.
Familiarity with containerization concepts; Docker and Kubernetes a plus.

7. Tasks & Responsibilities:
Designing, developing, implementing, and maintaining core infrastructure automation components using software engineering principles and methodology.
Develop automation for different infrastructure technologies and services (Network, Storage, Middleware, Database, etc.) using the client's approved automation tools and technologies.
Writing components that integrate with the different infrastructure components that constitute the client's private and public cloud implementations, using the interfaces exposed by those components (REST, Python, etc.).

The ideal candidate has excellent software development skills and DevOps or automation experience within a medium to large enterprise environment. This is a fast-paced team, and the successful candidate must be able to act with a sense of urgency and adapt to changing priorities. Qualified candidates must possess a keen sense of leadership, strong analytical skills, excellent communication skills, good technical abilities, and a can-do attitude. Should be able to work effectively in an onshore-offshore model using Agile software development methodologies.

Posted 2 months ago

Apply

5 - 10 years

20 - 27 Lacs

Bengaluru

Work from Office


About Zscaler
Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world's largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

We're looking for an experienced DevSecOps Engineer to join our team. Reporting to a Director of Engineering, you'll be responsible for:
Designing/architecting, implementing, and supporting end-to-end CI/CD systems for mission-critical distributed application deployments on Zscaler private and public clouds such as AWS and GCP
Assisting developers with merging, resolving conflicts, and creating and managing pre-commit hooks, and owning administration of DevSecOps tools such as GitLab, GitHub, Bitbucket, Bamboo, Jenkins, Grafana, Prometheus, Artifactory, ArgoCD/Flux, etc.
Securing the code, applications, and infrastructure, with strong working experience in security scanning (SAST/SCA/DAST) tools such as SonarQube, Snyk, BlackDuck, Coverity, CheckMarx, TruffleHog, etc.
Automating infrastructure provisioning and configuration (IaC) using tools like Terraform, Chef, Ansible, Puppet, etc.
Tracking and monitoring build metrics such as code coverage, build times, build queue times, and usage/consumption for build agents, and charting them over time using tools such as Prometheus, Grafana, CloudWatch, Splunk, Loki, etc.

What We're Looking For (Minimum Qualifications)
You would need a Bachelor of Engineering/Technology degree in Computer Science, Information Technology, or a related field (or equivalent work experience), with at least 4 years of hands-on experience in managing AWS, Google Cloud (GCP), and/or private cloud environments
Strong application development/automation experience with one of the OOP languages: C/C++/Java/Python/Go
Experience with SAST, SCA, DAST, and secret scans, and familiarity with scanning tools such as SonarQube, Snyk, Coverity, BlackDuck, CheckMarx, TruffleHog, etc.
Experience with container orchestration technologies such as Docker, Podman, Kubernetes, EKS/GKE, and proficiency in infrastructure and configuration automation using tools such as Terraform, CloudFormation, Ansible, Chef, Puppet, etc.
Experience with Git and GitOps-based pipelines using GitLab, GitHub, Bitbucket, and CI automation tools like Jenkins, GitHub Actions, Bamboo

What Will Make You Stand Out (Preferred Qualifications)
Experience writing and developing YAML-based CI/CD pipelines using GitLab and GitHub, and knowledge of build tools like makefiles/Gradle/npm/Maven
Experience with networking, load balancers, firewalls, and web security
Experience with AI and ML tools in day-to-day DevSecOps activities

#LI-Onsite #LI-AC10

At Zscaler, we believe that diversity drives innovation, productivity, and success. We are looking for individuals from all backgrounds and identities to join our team and contribute to our mission to make doing business seamless and secure. We are guided by these principles as we create a representative and impactful team, and a culture where everyone belongs. For more information on our commitments to Diversity, Equity, Inclusion, and Belonging, visit the Corporate Responsibility page of our website.

Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including: various health plans, time off plans for vacation and sick time, parental leave options, retirement options, education reimbursement, in-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines.

Zscaler is proud to be an equal opportunity and affirmative action employer. We celebrate diversity and are committed to creating an inclusive environment for all of our employees. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristics protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency: Zscaler complies with all applicable federal, state, and local pay transparency rules. For additional information about the federal requirements, click here.

Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.

Posted 2 months ago

Apply

4 - 9 years

15 - 20 Lacs

Bengaluru

Remote


Required Qualifications:
Experience: 5+ years of experience in functional testing with a focus on test automation using Eggplant.
Automation Expertise: Hands-on experience with Eggplant Functional for automating functional test cases.
Testing Methodologies: Strong understanding of functional testing techniques, regression testing, and exploratory testing.
Manual & Automated Testing: Experience in both manual testing and automating test cases using Eggplant or similar tools.
Defect Management: Familiarity with defect tracking tools like JIRA, Bugzilla, or HP ALM.
Scripting Skills: Knowledge of SenseTalk (Eggplant's scripting language) for creating and maintaining automation scripts.
Agile Methodologies: Experience working in Agile development environments, participating in sprints, and working with cross-functional teams.

Preferred Qualifications:
Test Management Tools: Experience with test management tools like TestRail, HP ALM, or Zephyr.
Continuous Testing: Familiarity with integrating Eggplant automation into CI/CD pipelines (using Jenkins, Bamboo, or similar tools).
Other Tools: Exposure to other automation testing tools such as Selenium, Appium, or UFT is a plus.
Certifications: ISTQB or other relevant QA/testing certifications are preferred.

Posted 2 months ago

Apply

4 - 7 years

6 - 9 Lacs

Bengaluru

Work from Office


Skills: DevOps Engineer, Bamboo, Jenkins, Java, Python, CI/CD, Artifactory

MicroGenesis Techsoft Pvt. Ltd. is a pioneering technology service provider in areas such as Application Lifecycle Management, DevOps, Software and Systems Engineering, and Robotic Process Automation. We have an open position for the role of DevOps Engineer in Bangalore.

Primary Skills:
Application Technologies: Sound knowledge and working experience with Jira, Confluence, Bitbucket, Bamboo, and Artifactory tools.
CI/CD Processes: 5+ years of experience in CI/CD processes and practices.
Scripting/Programming Languages: Proficiency in JavaScript, Perl, PowerShell, Bash, and Python for automating CI/CD pipelines.
Version Control Systems: Good understanding of distributed version control systems like Git.
DevOps Best Practices: Experience in designing, developing, and implementing best practices and automation for the DevOps pipeline.

Job Responsibilities:
DevOps Processes: Define best-in-class DevOps processes, analyze current DevOps processes, and identify gaps.
Open-Source Technologies: Utilize various open-source technologies effectively.
Automation: Develop scripts and automate tasks using PowerShell, Python, Bash, and Linux.
System Understanding: Understand the workings of various systems and manage IT operations.
Source Control Management: Manage source control, including Bitbucket and Git.
End-to-End Automation: Develop automation workflows that cover complete end-to-end processes.
Agile Methodologies: Knowledge of Agile methodologies such as Scrum and the capability to develop and analyze specifications and requirements.
Software Development Tools: Experience and sound knowledge of software development tools for SCM, release management, unit testing, integration testing, software validation, emulators, compilers, and software testing.
Technical Leadership: Provide technical leadership and set priorities/work assignments for the DevOps team.
Collaboration: Work with development teams to support software/DevOps activities.
Planning and Scheduling: Create and maintain DevOps development plans and schedules.
Technical Improvements: Track, implement, and lead the team in technical DevOps improvements.
Training: Responsible for providing training on DevOps tools and processes.
Issue Resolution: Identify DevOps requirements and performance issues and work with internal and external teams to resolve issues in a timely manner.
Support: Work in ticket systems to address maintenance and support issues.

Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of experience in DevOps, with a focus on CI/CD processes.
Strong scripting and programming skills in JavaScript, Perl, PowerShell, Bash, and Python.
Proficiency with Jira, Confluence, Bitbucket, Bamboo, and Artifactory.
Experience with Agile methodologies, particularly Scrum.
Excellent communication, collaboration, and problem-solving skills.
Ability to lead and mentor team members in DevOps practices and tools.

Posted 2 months ago

Apply

8 - 13 years

10 - 15 Lacs

Uttar Pradesh

Work from Office


Application Security (DevSecOps) Specialist

About the Role:
Primary responsibilities will be assisting with the delivery of DevSecOps and strategic AppSec projects. This includes performing DevSecOps/Agile and AppSec program assessments, performing architecture reviews and threat modeling, designing DevSecOps pipelines, assisting with large-scale DevSecOps transformations, performing secure configuration reviews, and providing technical leadership within our Application Security team. The Application Security Specialist will deliver the aforementioned services, deliver comprehensive reports, and provide thought leadership within the Application Security and DevSecOps space. You will spend your time focusing on challenging projects and solving complex problems.

Role Requirements:
Assist with the performance of Application Security services, including but not limited to DevSecOps and Application Security program assessments, application architecture reviews, threat modeling, designing industry-leading Application Security programs, secure SDLC implementation, security configuration reviews, and AppSec-related training.
Contribute to comprehensive assessment deliverables that are proficiently tailored to both technical and managerial audiences and fully detail the technical execution, core deficiencies, business impact, and realistic remediation strategies.
Awareness and understanding of the rapidly changing application security landscape, including open-source and commercial tools, assessment methodologies and approaches, and strategy frameworks such as OWASP SAMM and BSIMM.
Familiarity with common Agile development methodologies, such as the Scaled Agile Framework.
Familiarity with common DevSecOps-related tooling, including but not limited to continuous integration tooling (Azure DevOps, Jenkins, Bamboo), QA testing frameworks and tools (Cucumber, NUnit, JUnit), automated application security testing tools (SAST, DAST, IAST, OSA), defect tracking systems (JIRA, Azure DevOps), and containerization technologies.
Understanding of a broad range of application security issues, mitigation strategies, and common application security controls.
Perpetually strengthen relevant skills, knowledge, and abilities to stay at the forefront of the information security industry.
Foster stakeholder relationships with the development teams by providing support, information, and guidance.
Maintain a strong desire to learn, adapt, and improve.
Perform other duties as assigned.

Education, Credentials, and Experience:
Direct hands-on experience (8+ years) in performing Application Security service offerings, including but not limited to DevSecOps implementations, tool automation, application threat modeling, application architecture reviews, and program assessments.
Experience and working knowledge of application security controls, application architectures, database architectures, security requirements, and industry standards and frameworks.
Operational DevSecOps experience.
Hands-on experience with a broad range of DevOps tooling necessary to support scalable application security, such as containerization technologies, continuous integration tools, source code repositories, defect tracking systems, and QA testing tools.
Strong communication skills, including the ability to clearly articulate thoughts and distill complex problems into digestible pieces of information during live conversations and in formal deliverables.
Bachelor's degree in a relevant discipline or equivalent experience.

Posted 3 months ago

Apply

5 - 10 years

5 - 12 Lacs

Bengaluru

Work from Office


Greetings from Euclid! We are hiring!

Role: Package Consultant - Salesforce / Salesforce Developer
Skill: Salesforce
Total & Relevant Experience: 6 years total, 5 years relevant
Location: Bangalore | Maximum 30 days' notice
Mode of Interview: Face-to-face (any metropolitan city)
Mode of Work: Work from Office only

Job Description:
1) Sharing, Apex, Aura, trigger framework, JavaScript, and REST integration at the core.
2) Good analytical and logical skills for analysing and debugging.
3) Good communication skills.
4) Deployment: Git, change sets, Bamboo, or TeamCity.

Please apply or refer candidates only if you meet the criteria below: if you are ready to attend a face-to-face interview and to work from the office, share your updated resume along with your PAN card.

Posted 3 months ago

Apply

7 - 12 years

12 - 22 Lacs

Bengaluru

Work from Office


Greetings from Sun Technologies.

Position: Jira Admin
Experience: 7+ years
Job Type: Full Time / Permanent
Location: Sun Technologies, Hennur Cross, Bangalore
Working Hours: 5:30 PM to 2:30 AM / 9:30 PM to 6:30 AM (free cab with dinner facility from the company)
Work Mode: Work from Office (WFO), 5 days
Interview Mode: Virtual
Mandatory Skills: Jira, Confluence, Groovy scripting, REST API, ScriptRunner, Power Script

Job Description:
Seeking an experienced JIRA/Confluence professional with 5+ years supporting Atlassian products in a medium/large enterprise computing environment.

Must-haves and experience with:
Creating custom projects in JIRA with complex workflows and permission schemes to meet business needs
Testing JIRA project implementations and project enhancements
Communicating with stakeholders to identify JIRA and Confluence project needs and implementing them as appropriate
Analysis, diagnosis, and resolution of incoming JIRA/Confluence requests for additional functionality, including creation and management of custom projects and workflows
Providing training to JIRA users and other JIRA team members (both formal and informal)
Identifying, testing, and providing recommendations on plug-ins to meet project requirements
Maintaining JIRA/Confluence projects, workflows, and permissions
Reviewing and improving security within JIRA projects and Confluence spaces
Providing ideas for improving JIRA and Confluence instances
Ability to write custom JQL and understanding of SQL
Backend automation experience utilizing cron, Python, REST APIs, or other popular scripting languages
Red Hat Enterprise Linux (RHEL) server administration
Proven track record with Atlassian installations, migrations, and upgrades

Top 5 Skills:
Hands-on experience with creating and supporting scripting/automation tools
Experience with Agile methodologies and software development cycles
Linux server administration experience
Jira Server and Jira Cloud administration experience
Proven Atlassian tools integration experience

About the Company:
Established in 1996, Sun Technologies Inc. is recognized as an award-winning, innovative IT solutions company specializing in Infrastructure Management Services, Gaming Services, Application Development, and Application Testing Services, with niche expertise in storage, virtualization, middleware, and database. With highly skilled resources and innovative business models, we assist our customers in increasing revenues, enhancing brand value, and staying ahead of competitors. www.suntechnologies.com

If interested, please share your resume with chandanap@suntechnologies.com

Posted 3 months ago

Apply

8 - 13 years

0 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office

Naukri logo

Hiring for Jira Administrator - C2H Position - PAN India. Exp: 8 yrs. Looking for a Jira/Atlassian Administrator; Atlassian certification is mandatory. Interested candidates can share resumes to npreethi@radiants.com. Regards, Preethi N

Posted 3 months ago

Apply

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

Design, implement, and manage large-scale data processing systems using Big Data technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control-M, or shell scripting to automate data pipelines and workflows (a hedged Airflow sketch follows this listing). Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions.
Required education: Bachelor's Degree
Required technical and professional expertise
- Proficiency in Big Data technologies, including Hadoop, Apache Spark, and Hive
- Strong understanding of AWS services, particularly S3, Redshift, and EMR
- Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization
- Experience using scheduling tools such as Airflow, Control-M, or shell scripting
- Practical experience in Python and/or Scala programming languages
Preferred technical and professional experience
- Knowledge of Core Java (1.8 preferred) is highly desired
- Excellent communication skills and a willing attitude towards learning
- Solid experience in Linux and shell scripting
- Experience with PySpark or Spark is nice to have
- Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have
- Experience in data modelling, data quality assurance, and load assurance is nice to have
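
Not part of the posting: a minimal sketch of the kind of scheduled pipeline the role describes, using an Airflow DAG with a single Python task. The DAG id, schedule, and task body are assumptions for illustration, and the `schedule` argument assumes Airflow 2.4 or later; Control-M or plain shell scripting would be alternatives the listing also accepts.

```python
"""Hypothetical Airflow DAG: a daily one-task pipeline skeleton.
DAG id, schedule, and the task body are placeholders."""
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load() -> None:
    # Placeholder task body: a real pipeline might pull files from S3,
    # transform them with Spark, and load the results into Redshift.
    print("Running the daily extract-and-load step...")

with DAG(
    dag_id="daily_sales_pipeline",   # assumed DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```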

Posted 3 months ago

Apply

2 - 4 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

Design, implement, and manage large-scale data processing systems using Big Data technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control-M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks (a hedged PySpark sketch follows this listing). Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Proficiency in Big Data technologies, including Hadoop, Apache Spark, and Hive
- Strong understanding of AWS services, particularly S3, Redshift, and EMR
- Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization
- Experience using scheduling tools such as Airflow, Control-M, or shell scripting
- Practical experience in Python and/or Scala programming languages
Preferred technical and professional experience
- Knowledge of Core Java (1.8 preferred) is highly desired
- Excellent communication skills and a willing attitude towards learning
- Solid experience in Linux and shell scripting
- Experience with PySpark or Spark is nice to have
- Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have
- Experience in data modelling, data quality assurance, and load assurance is nice to have
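
For illustration only (not part of the posting): a small PySpark sketch of the Spark-plus-S3 work the listing describes, reading CSV data from S3, running a simple aggregation, and writing Parquet back. Bucket paths and column names are placeholders, and running it against S3 also assumes the Hadoop S3A connector and AWS credentials are configured.

```python
"""Hypothetical PySpark job: read CSV from S3, aggregate, write Parquet.
Bucket paths and column names are placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

INPUT_PATH = "s3a://example-raw-bucket/orders/*.csv"       # placeholder input
OUTPUT_PATH = "s3a://example-curated-bucket/daily_totals"  # placeholder output

def main() -> None:
    spark = SparkSession.builder.appName("daily_order_totals").getOrCreate()

    orders = spark.read.option("header", "true").csv(INPUT_PATH)

    # Assumed columns: order_date and amount; cast amount before summing.
    daily_totals = (
        orders.withColumn("amount", F.col("amount").cast("double"))
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    daily_totals.write.mode("overwrite").parquet(OUTPUT_PATH)
    spark.stop()

if __name__ == "__main__":
    main()
```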

Posted 3 months ago

Apply

2 - 4 years

4 - 6 Lacs

Bengaluru

Work from Office

Naukri logo

Design, implement, and manage large-scale data processing systems using Big Data technologies such as Hadoop, Apache Spark, and Hive. Develop and manage our database infrastructure based on Relational Database Management Systems (RDBMS), with strong expertise in SQL. Utilize scheduling tools like Airflow, Control-M, or shell scripting to automate data pipelines and workflows. Write efficient code in Python and/or Scala for data manipulation and processing tasks. Leverage AWS services including S3, Redshift, and EMR to create scalable, cost-effective data storage and processing solutions (a hedged S3-listing sketch follows this listing).
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Proficiency in Big Data technologies, including Hadoop, Apache Spark, and Hive
- Strong understanding of AWS services, particularly S3, Redshift, and EMR
- Deep expertise in RDBMS and SQL, with a proven track record in database management and query optimization
- Experience using scheduling tools such as Airflow, Control-M, or shell scripting
- Practical experience in Python and/or Scala programming languages
Preferred technical and professional experience
- Knowledge of Core Java (1.8 preferred) is highly desired
- Excellent communication skills and a willing attitude towards learning
- Solid experience in Linux and shell scripting
- Experience with PySpark or Spark is nice to have
- Familiarity with DevOps tools including Bamboo, JIRA, Git, Confluence, and Bitbucket is nice to have
- Experience in data modelling, data quality assurance, and load assurance is nice to have
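
Not part of the posting: a small boto3 sketch touching the S3 side of the AWS stack the listing mentions, listing the most recently modified objects under a prefix. The bucket name and prefix are placeholders, and credentials are assumed to come from the standard AWS credential chain (environment, profile, or instance role).

```python
"""Hypothetical boto3 example: list recent objects under an S3 prefix.
Bucket name and prefix are placeholders."""
import boto3

BUCKET = "example-raw-bucket"  # placeholder bucket
PREFIX = "orders/2024/"        # placeholder prefix

def list_recent_objects(limit: int = 10) -> list[str]:
    """Return keys of the most recently modified objects under PREFIX."""
    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    contents = resp.get("Contents", [])
    contents.sort(key=lambda obj: obj["LastModified"], reverse=True)
    return [obj["Key"] for obj in contents[:limit]]

if __name__ == "__main__":
    for key in list_recent_objects():
        print(key)
```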

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies