Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Your potential, unleashed. India’s impact on the global economy has increased at an exponential rate and Deloitte presents an opportunity to unleash and realize your potential amongst cutting edge leaders, and organizations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters. The team Deloitte helps organizations prevent cyberattacks and protect valuable assets. We believe in being secure, vigilant, and resilient—not only by looking at how to prevent and respond to attacks, but at how to manage cyber risk in a way that allows you to unleash new opportunities. Embed cyber risk at the start of strategy development for more effective management of information and technology risks. Your work profile As Assistant Manager in our Cyber Team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations: - We are looking for a skilled Cribl Data Engineer to design, manage, and optimize data pipelines that process and route machine data at scale. The ideal candidate will have hands-on experience with Cribl Stream, Cribl Edge, or Cribl Search, and a strong understanding of telemetry data workflows, observability tools, and data platforms like Splunk, Sentinel, Elastic, or S3. Design and build streaming data pipelines using Cribl Stream for routing, transforming, and enriching logs, metrics, and trace data. Configure data sources (e.g., Syslog, HEC, TCP, S3, Kafka) and destinations (e.g., Splunk, Sentinel, Elasticsearch, Data Lakes). Develop pipelines, routes, packs, and knowledge objects using Cribl’s UI and scripting features. Optimize data ingestion workflows to reduce costs, improve performance, and enhance data usability. Implement filtering, masking, sampling, and transformation logic using Cribl Functions (Regex, Eval, Lookup, JSON, etc.). Work with SIEM and observability teams to ensure clean, enriched, and correctly formatted data flows into tools like Splunk, Sentinel, S3, or OpenSearch. Monitor Cribl infrastructure and debug pipeline issues in real time using Cribl Monitoring and Health Checks. Implement version control, testing, and CI/CD for Cribl pipelines (using GitHub or GitLab). Participate in PoC evaluations, vendor integrations, and best practices documentation. Desired qualifications Education: Bachelor’s degree in Information Security, Computer Science, or a related field. A Master’s degree in Cybersecurity or Business Management is preferred. Experience: 3 to 5 years of hands-on experience with Cribl Stream and knowledge of Cribl Edge or Cribl Search. Strong understanding of log formats (Syslog, JSON, CSV, Windows Event Logs, etc.) Familiarity with SIEM platforms like Splunk, Microsoft Sentinel, Elastic Stack, QRadar, or Exabeam. Proficient in regex, JSON transformations, and scripting logic. Comfortable with cloud platforms (AWS/Azure/GCP) and object storage systems (e.g., S3, Azure Blob). Familiarity with Kafka, Fluentd, Fluent Bit, Logstash, or similar tools is a plus. Location and way of working Base location: Noida/Gurgaon. Professionals are required to work from the office. Your role as an Assistant Manager We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society.
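For illustration only: the Cribl functions named in this posting (Regex, Eval, Lookup, sampling) encode per-event routing, masking, and filtering decisions. The sketch below expresses that kind of logic in plain Python rather than Cribl's own function syntax; the field names, regex pattern, and route labels are assumptions, not details from the posting.

```python
import json
import random
import re

# Hypothetical field names and patterns, for illustration only; Cribl Stream
# would express the same logic through its Regex/Eval/Sampling functions.
CARD_RE = re.compile(r"\b\d{13,16}\b")          # mask anything that looks like a card number
NOISY_SOURCETYPES = {"debug", "healthcheck"}    # drop low-value events outright

def process_event(raw: str, sample_rate: float = 0.1):
    """Return a transformed event dict, or None if the event is dropped."""
    event = json.loads(raw)

    # Filtering: drop noisy source types entirely
    if event.get("sourcetype") in NOISY_SOURCETYPES:
        return None

    # Sampling: keep only a fraction of verbose INFO logs
    if event.get("level") == "INFO" and random.random() > sample_rate:
        return None

    # Masking: redact card-like numbers before the event reaches the SIEM
    if "message" in event:
        event["message"] = CARD_RE.sub("####MASKED####", event["message"])

    # Enrichment: tag the destination route (e.g. SIEM index vs. data lake)
    event["route"] = "siem" if event.get("level") in ("WARN", "ERROR") else "datalake"
    return event

if __name__ == "__main__":
    sample = '{"sourcetype": "app", "level": "ERROR", "message": "card 4111111111111111 declined"}'
    print(process_event(sample))
```

In an actual Cribl Stream deployment, the same decisions would typically be configured as chained Functions on a Route rather than hand-written code.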
In addition to living our purpose, Senior Executives across our organization must strive to be: Inspiring - Leading with integrity to build inclusion and motivation Committed to creating purpose - Creating a sense of vision and purpose Agile - Achieving high-quality results through collaboration and team unity Skilled at building diverse capability - Developing diverse capabilities for the future Persuasive / Influencing - Persuading and influencing stakeholders Collaborating - Partnering to build new solutions Delivering value - Showing commercial acumen Committed to expanding business - Leveraging new business opportunities Analytical Acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization Effective communication – Must be able to hold well-structured and well-articulated conversations to achieve win-win possibilities Engagement Management / Delivery Excellence - Effectively managing engagement(s) to ensure timely and proactive execution as well as course correction for the success of engagement(s) Managing change - Responding to a changing environment with resilience Managing Quality & Risk - Delivering high-quality results and mitigating risks with utmost integrity and precision Strategic Thinking & Problem Solving - Applying a strategic mindset to solve business issues and complex problems Tech Savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviors and attitudes to become more inclusive. How you’ll grow Connect for impact Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report. Empower to lead You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership. Inclusion for all At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters. Drive your career At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one size fits all career path, and global, cross-business mobility and up / re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte. Everyone’s welcome… entrust your happiness to us Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you. Interview tips We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable.
To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals. *Caution against fraudulent job offers*: We would like to advise career aspirants to exercise caution against fraudulent job offers or unscrupulous practices. At Deloitte, ethics and integrity are fundamental and not negotiable. We do not charge any fee or seek any deposits, advance, or money from any career aspirant in relation to our recruitment process. We have not authorized any party or person to collect any money from career aspirants in any form whatsoever for promises of getting jobs in Deloitte or for being considered against roles in Deloitte. We follow a professional recruitment process, provide a fair opportunity to eligible applicants and consider candidates only on merit. No one other than an authorized official of Deloitte is permitted to offer or confirm any job offer from Deloitte. We advise career aspirants to exercise caution.
Posted 1 week ago
0 years
3 - 9 Lacs
Bengaluru
On-site
Bangalore, Karnataka, India Job ID 766747 About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring the smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in ML Ops and expertise in both machine learning model development and deployment will be highly advantageous. What you will do: Generative AI & LLM Development, 12-15 Yrs of experience as Enterprise Software Architect with strong hands-on experience Strong hands-on experience in Python and microservice architecture concepts and development Expertise in crafting technical guides, architecture designs for AI platform Experience in Elastic Stack, Cassandra or any Big Data tool Experience with advanced distributed systems and tooling, for example, Prometheus, Terraform, Kubernetes, Helm, Vault, CI/CD systems. Prior experience building multiple AI/ML-based models, deploying them into production environments, and creating data pipelines Experience in guiding teams working on AI, ML, BigData and Analytics Strong understanding of development practices like architecture design, coding, testing, and verification. Experience with delivering software products, for example release management, documentation What you will Bring: Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models. Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks using the ELK stack, Python and other leading technologies. Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability. ELK Integration: Utilize ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and the related stack would be beneficial. Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance. Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
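As a hedged sketch of the Python-plus-ELK pipeline work this role describes, the snippet below bulk-indexes transformed records into Elasticsearch with the official Python client. The endpoint, index name, and document shape are assumptions for illustration.

```python
from datetime import datetime, timezone

from elasticsearch import Elasticsearch, helpers  # pip install elasticsearch

# Assumed endpoint and index name -- adjust for the actual cluster.
es = Elasticsearch("http://localhost:9200")
INDEX = "ml-feature-events"

def transform(record: dict) -> dict:
    """Normalise a raw record into the document shape the index expects."""
    return {
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "service": record.get("service", "unknown"),
        "latency_ms": float(record.get("latency_ms", 0)),
    }

def index_batch(records):
    """Bulk-index a batch, returning (successes, per-document errors)."""
    actions = ({"_index": INDEX, "_source": transform(r)} for r in records)
    return helpers.bulk(es, actions, raise_on_error=False)

if __name__ == "__main__":
    print(index_batch([{"service": "api", "latency_ms": "42.5"}]))
```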
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Ontic makes software that corporate and government security professionals use to proactively manage threats, mitigate risks, and make businesses stronger. Built by security and software professionals, the Ontic Platform connects and unifies critical data, business processes, and collaborators in one place, consolidating security intelligence and operations. We call this Connected Intelligence. Ontic serves corporate security teams across key functions, including intelligence, investigations, GSOC, executive protection, and security operations. As Ontic employees, we put our mission first and value the trust bestowed upon us by our clients to help keep their people safe. We approach our clients and each other with empathy while focusing on the execution of our strategy. And we have fun doing it. Who We Are Ontic is the first protective intelligence software company to digitally transform how Fortune 500 and emerging enterprises proactively address physical threat management to protect employees, customers and assets. Ontic’s SaaS-based platform collects and connects threat indicators to provide a comprehensive view of potential threats while surfacing critical knowledge so companies can assess and action more to maintain business continuity and reduce financial impact. Ontic also provides strategic consulting, multidimensional services, education and thought leadership for safety and security professionals at major corporations via its Center for Protective Intelligence. For more information please visit ontic.co As a DevOps Engineer at Ontic, you will be bridging the gap between development and operations, ensuring the highest level of security for our cloud-based services for government clients. You would be part of major cloud migration projects, moving critical services to the cloud. Ontic's DevOps Engineer will be an integral part of our global team, collaborating closely with our DevOps professionals in India and the U.S. This individual will contribute to ongoing projects and support initiatives, ensuring alignment and knowledge sharing across the team. What you should have 4+ years of experience in DevOps Proficient in Linux Hands-on experience with AWS cloud or Google Cloud Experience with technologies like Docker and Kubernetes is a must Experience with Ansible and Terraform is a must Excellent programming (Python, Go, Shell) and automation skills Working knowledge of web servers (Nginx/Apache), networking, and version control systems like GitLab or GitHub Experience with Oracle Cloud is preferred Experience in CI/CD (Jenkins), GitHub Actions, Argo CD Experience troubleshooting L1 issues Experience with MongoDB, Elasticsearch, and Kafka is a must Expertise with monitoring & logging tools like Kibana, Prometheus, Grafana, Logstash, New Relic Awareness of Cloud Security Best Practices What you will be doing at Ontic as DevOps Engineer Deployment of various infrastructures on Cloud platforms like AWS, GCP Server monitoring, analysis, and troubleshooting Integration of container technologies like Docker, Kubernetes Automation using Go, Python or Bash CI/CD integration for applications Database administration Maintain application SLAs Ontic is an equal-opportunity employer. We are committed to a work environment that celebrates diversity.
We do not discriminate against any individual based on race, color, sex, national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any factors protected by applicable law. All Ontic employees are expected to understand and adhere to all Ontic Security and Privacy related policies in order to protect Ontic data and our clients' data.
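A minimal sketch of the kind of Python automation the Ontic posting above asks for (monitoring, troubleshooting, CI/CD hooks): a health-check script whose exit code can gate a pipeline stage or trigger an alert. The service names and URLs are placeholders.

```python
import sys

import requests  # pip install requests

# Hypothetical service endpoints; in practice these would come from config
# management (Ansible inventory, Terraform outputs, etc.).
SERVICES = {
    "api": "https://api.example.internal/healthz",
    "search": "https://search.example.internal/_cluster/health",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers 200 within the timeout."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException as exc:
        print(f"[{name}] unreachable: {exc}")
        return False
    print(f"[{name}] {resp.status_code} in {resp.elapsed.total_seconds():.2f}s")
    return resp.status_code == 200

if __name__ == "__main__":
    results = [check(name, url) for name, url in SERVICES.items()]
    sys.exit(0 if all(results) else 1)  # non-zero exit lets CI or cron alert on failure
```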
Posted 1 week ago
3.0 - 6.0 years
4 - 8 Lacs
Bengaluru
Work from Office
We are looking for a Kibana Subject Matter Expert (SME) to support our Network Operations Center (NOC) by designing, developing, and maintaining real-time dashboards and alerting mechanisms. The ideal candidate will have strong experience in working with Elasticsearch and Kibana to visualize key performance indicators (KPIs), system health, and alerts related to NOC-managed infrastructure. Key Responsibilities: Design and develop dynamic and interactive Kibana dashboards tailored for NOC monitoring. Integrate various NOC elements such as network devices, servers, applications, and services into Elasticsearch/Kibana. Create real-time visualizations and trend reports for system health, uptime, traffic, errors, and performance metrics. Configure alerts and anomaly detection mechanisms for critical infrastructure issues using Kibana or related tools (e.g., ElastAlert, Watcher). Collaborate with NOC engineers, infrastructure teams, and DevOps to understand monitoring requirements and deliver customized dashboards. Optimize Elasticsearch queries and index mappings for performance and data integrity. Provide expert guidance on best practices for log ingestion, parsing, and data retention strategies. Support troubleshooting and incident response efforts by providing actionable insights through Kibana visualizations. Primary Skills Proven experience as a Kibana SME or similar role with a focus on dashboards and alerting. Strong hands-on experience with Elasticsearch and Kibana (7.x or higher). Experience in working with log ingestion tools (e.g., Logstash, Beats, Fluentd). Solid understanding of NOC operations and common infrastructure elements (routers, switches, firewalls, servers, etc.). Proficiency in JSON, Elasticsearch Query DSL, and Kibana scripting for advanced visualizations. Familiarity with alerting frameworks such as ElastAlert, Kibana Alerting, or Watcher. Good understanding of Linux-based systems and networking fundamentals. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills. Preferred Qualifications: Experience in working within telecom, ISP, or large-scale IT operations environments. Exposure to Grafana, Prometheus, or other monitoring and visualization tools. Knowledge of scripting languages such as Python or Shell for automation. Familiarity with SIEM or security monitoring solutions.
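To make the Elasticsearch Query DSL requirement above concrete, here is a hedged example of the sort of aggregation a NOC dashboard panel or alert rule might run: error events per device over the last 15 minutes. The index pattern and ECS-style field names are assumptions.

```python
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Assumed cluster endpoint and index pattern for NOC syslog data.
es = Elasticsearch("http://localhost:9200")

# Count error-level events per device over the last 15 minutes -- the kind of
# aggregation a Kibana visualization or alert rule runs behind the scenes.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"log.level": "error"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "aggs": {"errors_per_device": {"terms": {"field": "host.name", "size": 10}}},
    "size": 0,
}

resp = es.search(index="noc-syslog-*", body=query)
for bucket in resp["aggregations"]["errors_per_device"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```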
Posted 1 week ago
5.0 - 10.0 years
7 - 11 Lacs
Hyderabad
Work from Office
JOB DESCRIPTION: We are looking for an experienced Senior Java Developer with a strong background in observability and telemetry to join our talented team. In this role, you will be responsible for designing, implementing, and maintaining robust and scalable solutions that enable us to gain deep insights into the performance, reliability, and health of our systems and applications. WHAT'S IN IT FOR YOU: - You will get a pivotal role in the project and associated incentives based on your contribution towards the project's success. - Working on optimizing performance of a platform handling data volume in the range of 5-8 petabytes. - An opportunity to collaborate and work with engineers from Google, AWS, ELK - You will be enabled to take up a leadership role in the future to set up your team as you grow with the customer during the project engagement. - Opportunity for advancement within the company, with clear paths for career progression based on performance and demonstrated capabilities. - Be part of a company that values innovation and encourages experimentation, where your ideas are heard and your contributions are recognized and rewarded. - Work in a zero micro-management culture where you get to enjoy accountability and ownership for your tasks. RESPONSIBILITIES: - Design, develop, and maintain Java-based microservices and applications with a focus on observability and telemetry. - Implement best practices for instrumenting, collecting, analyzing, and visualizing telemetry data (metrics, logs, traces) to monitor and troubleshoot system behavior and performance. - Collaborate with cross-functional teams to integrate observability solutions into the software development lifecycle, including CI/CD pipelines and automated testing frameworks. - Drive improvements in system reliability, scalability, and performance through data-driven insights and continuous feedback loops. - Stay up-to-date with emerging technologies and industry trends in observability, telemetry, and distributed systems to ensure our systems remain at the forefront of innovation. - Mentor junior developers and provide technical guidance and expertise in observability and telemetry practices. REQUIREMENTS: - Bachelor's or Master's degree in Computer Science, Engineering, or related field. - 5+ years of professional experience in software development with a strong focus on Java programming. - Expertise in observability and telemetry tools and practices, including but not limited to Prometheus, Grafana, Jaeger, ELK stack (Elasticsearch, Logstash, Kibana), and distributed tracing. - Solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies (AWS, Azure, GCP). - Proficiency in designing and implementing scalable, high-performance, and fault-tolerant systems. - Strong analytical and problem-solving skills with a passion for troubleshooting complex issues. - Excellent communication and collaboration skills with the ability to work effectively in a fast-paced, agile environment. - Experience with Agile methodologies and DevOps practices is a plus. Location: Others - Delhi / NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai
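The role above is Java-centric, but the instrumentation idea behind Prometheus-style telemetry is language-agnostic. A minimal sketch in Python using prometheus_client (metric names and the scrape port are assumptions) shows the counter/histogram pattern a Java service would expose through its own client library.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

# Hypothetical metric names; a Java service would expose equivalents via its own client.
REQUESTS = Counter("orders_processed_total", "Orders processed", ["status"])
LATENCY = Histogram("order_processing_seconds", "Order processing latency")

@LATENCY.time()
def process_order() -> None:
    time.sleep(random.uniform(0.01, 0.1))          # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)                         # metrics scraped at :8000/metrics
    while True:
        process_order()
```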
Posted 1 week ago
5.0 years
0 Lacs
Delhi, India
Remote
Elastic, the Search AI Company, enables everyone to find the answers they need in real time, using all their data, at scale — unleashing the potential of businesses and people. The Elastic Search AI Platform, used by more than 50% of the Fortune 500, brings together the precision of search and the intelligence of AI to enable everyone to accelerate the results that matter. By taking advantage of all structured and unstructured data — securing and protecting private information more effectively — Elastic’s complete, cloud-based solutions for search, security, and observability help organizations deliver on the promise of AI. What Is The Role You will have the opportunity to work with a tremendous services, engineering, and sales team and wear many hats. This is a meaningful role; as a Consulting Architect, Observability, you have an outstanding chance to create an immediate impact on the success of Elastic and our customers. What You Will Be Doing Deliver Elastic solutions and Elastic Stack expertise to drive customer business value from our products Work with clients to facilitate strategy, roadmap, design, and capacity planning workshops in mission-critical environments Strong customer advocacy, relationship building, and communications skills Comfortable working remotely in a highly distributed team Development of demos and proof-of-concepts that highlight the value of the Elastic Stack and Solutions Elastic solutions adoption and acceleration along with data modeling, query development and optimization, cluster tuning and scaling with a focus on fast search and analytics at scale Drive and manage the objectives, requirements gathering, project tasks/milestones, project status, dependencies, and timelines, to ensure engagements are delivered optimally and on time while meeting the business objectives Working closely with engineering, product management, and support teams to identify feature improvements, extensions, and product defects. Facilitate feedback from the field back to the product. Engaging with the Elastic Sales team to scope opportunities while assessing technical risks, questions, or concerns Be a mentor to your team members. What You Bring Bachelor’s, Master’s or PhD in Computer Science or related engineering field preferred, or equivalent combination of education, training, and experience. Minimum 5 years as a consultant, engineer or architect. Experience in time-series data ingestion. End-to-end ingestion methods (Agent, Beats, and Logstash). Familiarity with messaging queues (Kafka, Redis). Experience in ingest optimization, data streams and sharding strategy. Experience in ingest lag analysis and improvement. Knowledge of Elastic Common Schema, data parsing and normalization. Enable customers to adopt the Elastic Observability solution and related OOTB features. Design and build custom visual artifacts, with an understanding of the key metrics that make valuable contributions to your customer. Identify thresholds for alerting. Familiarity with Fleet and agent installation policies, and scalability considerations. Knowledge in deploying enterprise observability (Metrics and Logs) solutions at scale (Application performance monitoring (APM), User experience monitoring (UEM), Infrastructure optimization, Network visibility and monitoring). Experience leading observability projects at both the architectural and program level. Experience working with monitoring tools that integrate into service management. Experience working to deliver and complete professional services engagements.
Experience as a public speaker to large audiences on enterprise infrastructure software technology to engineers, developers, and other technical positions. Hands-on experience and an understanding of Elasticsearch and/or Lucene. Excel at working directly with customers to gather, prioritize, plan and implement solutions to customer business requirements as they relate to our technologies. Understanding and passion for open-source technology and knowledge and proficiency in at least one programming language. Strong hands-on experience with large distributed systems and application infrastructure from an architecture and development perspective. Knowledge of information retrieval and/or analytics domain. Understanding and/or certification in one or more of the following technologies: Kubernetes, Linux, Java and databases, Docker, Amazon Web Services (AWS), Azure, Google Cloud (GCP), Kafka, Redis, VMs, Lucene. Occasional travel up to 20% Bonus Points: Big 4 Experience Deep understanding of our product, including Elastic Certified Engineer certification Comfortable with Ansible, JavaScript, Terraform ECK experience or Kubernetes Knowledge of machine learning and Artificial Intelligence (AI) Proven understanding of Java and Linux/Unix environment, software development, and/or experience with distributed systems Experience and curiosity about delivering and/or developing product training Experience contributing to an open-source project or documentation Additional Information - We Take Care Of Our People As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do. We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do. Competitive pay based on the work you do here and not your previous salary Health coverage for you and your family in many locations Ability to craft your calendar with flexible locations and schedules for many roles Generous number of vacation days each year Increase your impact - We match up to $2000 (or local currency equivalent) for financial donations and service Up to 40 hours each year to use toward volunteer projects you love Embracing parenthood with a minimum of 16 weeks of parental leave Different people approach problems differently. We need that. Elastic is an equal opportunity employer and is committed to creating an inclusive culture that celebrates different perspectives, experiences, and backgrounds. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation. We welcome individuals with disabilities and strive to create an accessible and inclusive experience for all individuals. To request an accommodation during the application or the recruiting process, please email candidate_accessibility@elastic.co. We will reply to your request within 24 business hours of submission.
Applicants have rights under Federal Employment Laws, view posters linked below: Family and Medical Leave Act (FMLA) Poster; Pay Transparency Nondiscrimination Provision Poster; Employee Polygraph Protection Act (EPPA) Poster and Know Your Rights (Poster) Elasticsearch develops and distributes encryption software and technology that is subject to U.S. export controls and licensing requirements for individuals who are located in or are nationals of the following sanctioned countries and regions: Belarus, Cuba, Iran, North Korea, Russia, Syria, the Crimea Region of Ukraine, the Donetsk People’s Republic (“DNR”), and the Luhansk People’s Republic (“LNR”). If you are located in or are a national of one of the listed countries or regions, an export license may be required as a condition of your employment in this role. Please note that national origin and/or nationality do not affect eligibility for employment with Elastic. Please see here for our Privacy Statement. Different people approach problems differently. We need that. Elastic is an equal opportunity/affirmative action employer committed to diversity, equity, and inclusion. Qualified applicants will receive consideration for employment without regard to race, ethnicity, color, religion, sex, pregnancy, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, disability status, or any other basis protected by federal, state or local law, ordinance or regulation. We welcome individuals with disabilities and strive to create an accessible and inclusive experience for all individuals. To request an accommodation during the application or the recruiting process, please email candidate_accessibility@elastic.co We will reply to your request within 24 business hours of submission. Applicants have rights under Federal Employment Laws, view posters linked below: Family and Medical Leave Act (FMLA) Poster; Equal Employment Opportunity (EEO) Poster; and Employee Polygraph Protection Act (EPPA) Poster. Please see here for our Privacy Statement.
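One concrete slice of the ingest and Elastic Common Schema work described in the consulting architect role above: an ingest pipeline that parses a syslog-style line into ECS fields. It is shown as a Python dict for readability; the pipeline name, grok pattern, and field choices are assumptions, and the body would be submitted to the cluster's _ingest/pipeline API.

```python
import json

# A minimal ingest pipeline that parses a syslog-style line and maps it onto
# Elastic Common Schema fields. Pipeline name and grok pattern are assumptions;
# the body would be PUT to /_ingest/pipeline/syslog-ecs on the cluster.
pipeline = {
    "description": "Parse syslog lines into ECS fields",
    "processors": [
        {
            "grok": {
                "field": "message",
                "patterns": [
                    "%{SYSLOGTIMESTAMP:tmp.timestamp} %{HOSTNAME:host.name} "
                    "%{WORD:process.name}: %{GREEDYDATA:event.original}"
                ],
            }
        },
        # The date processor writes the parsed value to @timestamp by default.
        {"date": {"field": "tmp.timestamp", "formats": ["MMM d HH:mm:ss", "MMM dd HH:mm:ss"]}},
        {"remove": {"field": "tmp.timestamp", "ignore_missing": True}},
        {"set": {"field": "event.dataset", "value": "syslog"}},
    ],
}

print(json.dumps(pipeline, indent=2))
```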
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Description EGNYTE YOUR CAREER. SPARK YOUR PASSION. Role Egnyte is a place where we spark opportunities for amazing people. We believe that every role has meaning, and every Egnyter should be respected. With 22,000+ customers worldwide and growing, you can make an impact by protecting their valuable data. When joining Egnyte, you’re not just landing a new career, you become part of a team of Egnyters – doers, thinkers, and collaborators who embrace and live by our values: Invested Relationships Fiscal Prudence Candid Conversations About Egnyte Egnyte is the secure multi-cloud platform for content security and governance that enables organizations to better protect and collaborate on their most valuable content. Established in 2008, Egnyte has democratized cloud content security for more than 22,000 organizations, helping customers improve data security, maintain compliance, prevent and detect ransomware threats, and boost employee productivity on any app, any cloud, anywhere. For more information, visit www.egnyte.com. The Opportunity We are looking for a motivated C++ Engineer to join our Windows Desktop team. If you want to contribute your enthusiasm to the development of a global product with an impressive client base, do reach out! By joining our team, you will work directly with SW developers, QA engineers, Product Owners as well as UI/UX designers. We work according to the agile methodology, and we consider reliability and performance as the main focus areas to deliver business value to our customers around the globe. What You’ll Do Developing the client application throughout all phases of the product lifecycle Own, improve, maintain and enhance code of a desktop application for Windows, one of the primary access points for users to Egnyte’s cloud-based solution Influence features and the implementation of our product Collaborate with other developers, product owners, and QA in multicultural, geographically distributed teams across multiple time zones Your Qualifications Bachelor’s or Master’s degree in Computer Science or a related field. 5+ years of software engineering experience in modern C++ programming. Experience in Windows development: WinAPI, .NET API, WPF, and PowerShell. Understanding Windows concepts like processes, multithreading, registry and system privileges. Understanding of filesystem concepts, such as file types, permissions, atomicity, journaling, and caching.
Knowledge of tools like ProcMon, WinDBG, Visual Studio Profiler, PerfView, Wireshark and Postman Hands-on experience in the development and maintenance of multithreaded and multiprocess applications for Windows Proven hands-on experience with Agile methodologies, Git, CI/CD pipelines, and TDD Nice To Have Experience with COM, WMI, UWP, WinUI, Windows kernel drivers, Windows installer (MSI), virtualization technologies hosting Windows OS, Azure platform Experience in networking protocols and standards: HTTP, TLS, W3C, OWASP, network certificates management and network diagnostics Expertise in PowerShell scripting for automation Experience with monitoring tools like Grafana, ELK Stack (Elasticsearch, Logstash, Kibana) Hands-on experience in programming and using Jenkins Understanding of REST API principles and experience in developing or integrating RESTful services Benefits Competitive salaries Medical insurance and healthcare benefits for you and your family Fully paid premiums for life insurance Flexible hours and PTO Mental wellness platform subscription Gym reimbursement Childcare reimbursement Group term life insurance Commitment To Diversity, Equity, And Inclusion At Egnyte, we celebrate our differences and thrive on our diversity for our employees, our products, our customers, our investors, and our communities. Egnyters are encouraged to bring their whole selves to work and to appreciate the many differences that collectively make Egnyte a higher-performing company and a great place to be.
Posted 1 week ago
5.0 - 10.0 years
5 - 9 Lacs
Pune
Work from Office
Job ID: 199107 Required Travel: Minimal Managerial - No Location: India - Pune (Amdocs Site) In one sentence Responsible for providing outstanding technical support to a global customer base. Keeps ownership for the resolution of complex technical problems, including debugging, simulations, locating bugs, tool and script development for problem diagnosis, troubleshooting and reproduction. All you need is... Bachelor's degree in Computer Science/Information Technology or equivalent. 5+ years of experience as a Software Support specialist. Experience in Unix, databases, and shell scripting. Should have experience with Kubernetes and Kibana. Should have knowledge of team management. What will your job look like? Investigate, debug and reproduce issues, provide fixes and workarounds, and verify changes to ensure continued operability of the software solution. Analyse production issues from the business and application perspective and outline corrective actions. Technical focal point with other teams to resolve cross product/solution issues. Ownership and accountability of specific modules within an application and provide technical support and mentorship in problem resolution for complex issues. Bring continuous improvements/efficiencies to software or business processes by utilizing Software Engineering tools, various innovations and techniques and the reuse of existing solutions. Contribute to meeting various SLAs and critical metrics to guarantee that tasks are completed on time and the delivery timelines meet the quality targets of the organization. Onboard new employees and train them on processes and collaboration with team members. Take an active role in team building, including technical mentoring and knowledge transfer. Partner with internal/external customers to improve the understanding of customer problems and verify that an appropriate resolution has been applied. Why you will love this job: Get a chance to gain valuable experience and wide knowledge of software integrative systems! Get the opportunity to be exposed to advanced market technologies, working with multiple channels and diverse areas of expertise! Who are we? Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.
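A small, hedged example of the triage scripting this support role leans on (Unix, shell, Kubernetes): a Python helper that flags non-running pods and pulls their recent logs via kubectl. The namespace is a placeholder.

```python
import json
import subprocess

NAMESPACE = "billing"  # hypothetical namespace; adjust per environment

def unhealthy_pods(namespace: str):
    """Yield (name, phase) for pods that are not Running/Succeeded, as a first triage pass."""
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for pod in json.loads(out)["items"]:
        phase = pod["status"].get("phase", "Unknown")
        if phase not in ("Running", "Succeeded"):
            yield pod["metadata"]["name"], phase

if __name__ == "__main__":
    for name, phase in unhealthy_pods(NAMESPACE):
        print(f"{name}: {phase}")
        # Pull the last few log lines for the ticket or a Kibana cross-check
        subprocess.run(["kubectl", "logs", "--tail=20", "-n", NAMESPACE, name])
```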
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring the smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in ML Ops and expertise in both machine learning model development and deployment will be highly advantageous. What you will do: Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models. Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks using the ELK stack, Python and other leading technologies. Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability. ELK Integration: Utilize ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and the related stack would be beneficial. Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance. Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability. What you will Bring: Machine Learning Model Development: Collaborate with data scientists to develop and implement machine learning models, ensuring they meet performance and accuracy requirements. Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health. Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations. MLOps (Added Advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows. Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth.
We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 766745
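As a hedged illustration of the data quality checks this posting mentions, here is a small Python gate that validates records before they flow on to indexing or model training. The required fields and KPI range are assumptions standing in for real data contracts.

```python
from typing import Iterable, Tuple

# Hypothetical schema for incoming telemetry records; real checks would come
# from the team's own data contracts.
REQUIRED_FIELDS = ("node_id", "timestamp", "kpi_value")

def validate(record: dict) -> Tuple[bool, str]:
    """Return (is_valid, reason) for a single record."""
    for field in REQUIRED_FIELDS:
        if field not in record or record[field] in (None, ""):
            return False, f"missing {field}"
    try:
        value = float(record["kpi_value"])
    except (TypeError, ValueError):
        return False, "kpi_value is not numeric"
    if not (0.0 <= value <= 100.0):          # assumed valid KPI range
        return False, "kpi_value out of range"
    return True, "ok"

def split_batch(records: Iterable[dict]):
    """Route clean records onward; quarantine the rest with a rejection reason."""
    clean, rejected = [], []
    for rec in records:
        ok, reason = validate(rec)
        (clean if ok else rejected).append((rec, reason))
    return clean, rejected
```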
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
5+ yrs of ELK stack (Elasticsearch, Logstash, Kibana) Expertise in Logstash pipelines, Beats, & data transformation Skilled in Kibana dashboards & Elasticsearch query DSL Scripting knowledge (Python, Shell) Proficient in JSON, YAML, REST APIs
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India Job ID 766747 About this opportunity: This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring the smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in ML Ops and expertise in both machine learning model development and deployment will be highly advantageous. What you will do: Generative AI & LLM Development, 12-15 Yrs of experience as Enterprise Software Architect with strong hands-on experience Strong hands-on experience in Python and microservice architecture concepts and development Expertise in crafting technical guides, architecture designs for AI platform Experience in Elastic Stack, Cassandra or any Big Data tool Experience with advanced distributed systems and tooling, for example, Prometheus, Terraform, Kubernetes, Helm, Vault, CI/CD systems. Prior experience building multiple AI/ML-based models, deploying them into production environments, and creating data pipelines Experience in guiding teams working on AI, ML, BigData and Analytics Strong understanding of development practices like architecture design, coding, testing, and verification. Experience with delivering software products, for example release management, documentation What you will Bring: Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models. Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks using the ELK stack, Python and other leading technologies. Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability. ELK Integration: Utilize ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and the related stack would be beneficial. Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance. Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
Posted 1 week ago
10.0 - 15.0 years
7 - 12 Lacs
Kochi
Work from Office
As a Software Development Manager, you’ll manage software development, enhance product experiences, and scale our team’s capabilities. You’ll manage careers, streamline hiring, collaborate with product, and drive innovation. We seek proactive professionals passionate about team growth, software architecture, coding, and process enhancements. Mastery of frameworks, deployment tech, and cloud APIs is essential, as well as adaptability to innovative technologies. We are seeking a Development Manager to join our Connector Mission leadership team in the IBM Software Product Development organization, under the IBM App Connect product. IBM® App Connect instantly connects applications and data from existing systems and modern technologies across all environments. App Connect offers enterprise service bus (ESB) and agile integration architecture (AIA) microservices deployment of integration artifacts, allowing businesses to deploy to a multitude of flexible integration patterns. Development Managers with agile product development experience in cloud native or OCP native web-based products and managed services are desired. Your primary responsibilities include: Solutions Development: Lead the development of innovative solutions to enhance our product and development experience, effectively contributing to making our software better. Team Growth and Management: Manage the career growth of team members, scale hiring and development processes, and foster a culture of continuous improvement within the team. Strategic Partnership: Partner with product teams to brainstorm ideas and collaborate on delivering an exceptional product, contributing to the overall success of the organization. Technical Direction: Provide technical guidance by actively participating in architectural discussions, developing code, and advocating for new process improvements to drive innovation and efficiency. As a Software Development Manager, you: Are experienced with client-server architectures, networking protocols, application development, and using databases. Have hands-on experience in Application Development Are experienced in People Management Are experienced in Product Delivery, Support and Maintenance Have experience using and developing APIs. Understand user and system requirements Have an understanding of, or experience with, Agile development methodology. What You’ll Do: You’ll work in a dynamic, collaborative environment to understand requirements, design, code and test innovative applications, and support those applications for our highly valued customers. You’ll employ IBM’s Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability. Design and code services, applications and databases that are reusable, scalable and meet critical architecture goals. Create Application Programming Interfaces (APIs) that are clean, well-documented, and easy to use. Create and configure Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) applications. Design and implement large scale systems and Service Oriented Architectures (SOA) that enable continuous delivery.
Manage a team of approximately 15 software engineers Collaborate with our development, devops and leadership teams worldwide Who You Are: You are highly motivated and have a passion for creating and supporting great products You thrive on collaboration, working side by side with people of all backgrounds and disciplines, and you have very strong verbal and written communication skills. You are great at solving problems, debugging, troubleshooting, designing and implementing solutions to complex technical issues. You have a solid understanding of software development and programming languages. You have the ability to learn new skills quickly and use the skills efficiently. Required education Bachelor's Degree Required technical and professional expertise Overall 10+ years of industry experience and 5+ years of experience in leading teams as a people manager Experience with Docker and container orchestration technologies such as OpenShift Container Platform (OCP), Kubernetes Familiarity with cloud-based providers: IBM Cloud, AWS, Azure, Google Compute, etc., and their hosting tools and APIs Experience working with and developing APIs Experience working with operating systems (Linux, Red Hat OpenShift, etc.). Familiarity with various cloud and DB technologies: Docker, Kubernetes, Elasticsearch, Logstash, Kibana, CouchDB, Cassandra, and Postgres Experience in full stack development working with servers, applications and databases using Node.js, JavaScript, React.js, etc. Experience in delivery via Agile methodology Experience in Product Development, Maintenance and Support Experience in Customer Support and managing escalations Preferred technical and professional experience Solid experience with OCP native containers Scripting and deployment topology knowledge: Python, shell, Ansible, Chef, Puppet, etc. Monitoring workloads through clouds (New Relic, Sysdig, Elasticsearch, Logstash, and Kibana) Cloud concepts around auto-scale and auto-recover cloud components General IT security standards, principles, and compliances (ISO27k, SOC2, GDPR, PCI, etc.) Familiarity with continuous delivery and CI/CD technologies: ArgoCD, Terraform, etc. Familiarity with RPA or AI technologies
Posted 1 week ago
5.0 - 10.0 years
19 - 22 Lacs
Pune
Work from Office
Job Description We are looking for an ambitious and highly skilled Go Developer who is passionate about building high-performance, scalable backend systems. This role is perfect for someone who thrives on solving complex engineering challenges, enjoys working with modern development practices, and takes ownership of delivering impactful solutions. You will be part of a dynamic team where innovation, collaboration, and continuous improvement are not just encouraged, they are expected. If you are eager to make a meaningful contribution to real-world systems in a fast-paced environment, we want to hear from you. Skill / Qualifications Bachelor's degree in Computer Science, Engineering, or related technical field 5+ years of hands-on backend development experience Strong programming expertise in Golang Hands-on experience with MongoDB, OracleDB, and Snowflake Proficiency in using Logstash, Elasticsearch, and Splunk (Queries, Alerts, Dashboards) Experience in writing and maintaining scripts for automation and monitoring Familiarity with containerization and orchestration using Docker and Kubernetes Proficient in using Kafka for messaging and stream processing Comfortable working with GitLab for version control and CI/CD pipelines Experience handling incident alerts and escalations via PagerDuty Job Responsibilities Participate in daily stand-ups, code reviews, and sprint planning Review code and tickets to ensure high-quality development practices Design technical specifications for databases and APIs Plan and execute production deployments reliably and efficiently Provide Level 2 on-call support via PagerDuty for escalated incidents Collaborate with cross-functional teams including QA, DevOps, and product stakeholders Ensure effective incident response and root cause analysis for production issues Benefits Competitive Market Rate (Depending on Experience)
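The role itself calls for Go, but the consume-transform-index flow around Kafka and Elasticsearch that it describes can be sketched briefly in Python; the topic, broker, and index names below are placeholders, not details from the posting.

```python
import json

from elasticsearch import Elasticsearch    # pip install elasticsearch
from kafka import KafkaConsumer            # pip install kafka-python

# Topic, brokers, and index name are assumptions for illustration; the actual
# service in this role would be written in Go.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=["localhost:9092"],
    group_id="orders-indexer",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
es = Elasticsearch("http://localhost:9200")

# Consume messages, enrich with provenance, and index each one for search.
for message in consumer:
    doc = message.value
    doc["kafka_offset"] = message.offset   # keep provenance for debugging
    es.index(index="orders", document=doc)
```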
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role As an IT Solutions professional, you'll serve as the key technical leader, guiding systems management specialists and internal teams through complex challenges. You'll be the trusted expert that customers and Kyndryl account teams turn to when they need insight, technical guidance, or support during major incidents and critical technical discussions. With your expertise, you’ll assess customers’ IT environments, identify any technological gaps, and develop tailored remediation plans that elevate their operational capabilities. Your recommendations will be pivotal in helping businesses evolve and stay ahead in the digital landscape. In this role, you'll lead the charge during recovery and restoration efforts, ensuring that progress is communicated effectively to stakeholders, from management to customer-facing teams. You'll track each action with precision, applying your diagnostic and troubleshooting skills to resolve issues efficiently. Key Responsibilities: Architect, deploy, and optimize Elastic Observability stack to support full-fidelity telemetry collection. Implement APM, Logs, Metrics, and Uptime Monitoring using Elastic and OpenTelemetry standards. Design Elastic index templates, ILM policies, ingest pipelines, and dashboards tailored to enterprise needs. Collaborate with infra, app, and DevOps teams to onboard apps and services into observability pipelines. Integrate Elastic with third-party tools (e.g., Zabbix, Prometheus, OpenTelemetry Collector). Tune performance and storage strategy for high-scale ingestion environments (50+ apps, 500+ servers). Create SOPs, runbooks, and dashboards for observability operations. Provide guidance on cost optimization, licensing, and scaling models for Elastic deployments. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. You’ll have access to data, hands-on learning experiences, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find opportunities here that you won’t find anywhere else. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. Required Technical and Professional Expertise 5+ years of hands-on experience with the Elastic Stack (Elasticsearch, Kibana, Logstash/Beats). Strong knowledge of Elastic APM, Fleet, and integrations with OpenTelemetry and metric sources. Experience with data ingest and transformation using Logstash, Filebeat, Metricbeat, or custom agents. Proficiency in designing dashboards, custom visualizations, and alerting in Kibana. Experience working with Kubernetes, Docker, and Linux systems. Understanding of ILM, hot-warm-cold tiering, and Elastic security controls. Preferred Technical and Professional Experience Exposure to Elastic Cloud, ECE, or ECK. 
Familiarity with alternatives like Dynatrace, Datadog, AppDynamics or SigNoz for benchmarking. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed. Get Referred! If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
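To ground the ILM and hot-warm-cold tiering responsibilities in the Kyndryl posting above, here is a hedged example of a lifecycle policy expressed as the JSON body for the _ilm/policy API, shown as a Python dict. The rollover thresholds and retention windows are assumptions.

```python
import json

# A hot-warm-delete lifecycle policy of the kind the posting describes; the
# rollover thresholds and retention windows are assumptions, and the body
# would be PUT to /_ilm/policy/telemetry-logs on the cluster.
policy = {
    "policy": {
        "phases": {
            "hot": {
                "actions": {
                    "rollover": {"max_primary_shard_size": "50gb", "max_age": "1d"}
                }
            },
            "warm": {
                "min_age": "7d",
                "actions": {
                    "shrink": {"number_of_shards": 1},
                    "forcemerge": {"max_num_segments": 1},
                },
            },
            "delete": {
                "min_age": "30d",
                "actions": {"delete": {}},
            },
        }
    }
}

print(json.dumps(policy, indent=2))
```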
Posted 1 week ago
5.0 years
0 Lacs
Hyderābād
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
In Systems Management at Kyndryl, you will be critical in ensuring the smooth operation of our customers' IT infrastructure. You'll be the mastermind behind maintaining and optimizing their systems, ensuring they're always running at peak performance.

Key Responsibilities:
Develop and customise OpenTelemetry Collectors to support platform-specific instrumentation (Linux, Windows, Docker, Kubernetes).
Build processors, receivers, and exporters in OTEL to align with Elastic APM data schemas.
Create robust and scalable pipelines for telemetry data collection and delivery to Elastic Stack.
Work closely with platform and application teams to enable auto-instrumentation and custom telemetry.
Automate deployment of collectors via Ansible, Terraform, Helm, or Kubernetes operators.
Collaborate with Elastic Observability team to validate ingestion formats, indices, and dashboard readiness.
Benchmark performance and recommend cost-effective designs.

Your Future at Kyndryl
Kyndryl's focus on providing innovative IT solutions to its customers means that in Systems Management, you will be working with the latest technology and will have the opportunity to learn and grow your skills. You may also have the opportunity to work on large-scale projects and collaborate with other IT professionals from around the world.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Expertise
5+ years of experience with Golang or similar languages in a systems development context.
Deep understanding of OpenTelemetry Collector architecture, pipelines, and customization.
Experience with Elastic APM ingestion endpoints and schema alignment.
Familiarity with Docker, Kubernetes, system observability (eBPF optional but preferred).
Hands-on with deployment automation tools: Ansible, Terraform, Helm, Kustomize.
Strong grasp of telemetry protocols: OTLP, gRPC, HTTP, and metrics formats like Prometheus, StatsD.
Strong knowledge of Elastic APM, Fleet, and integrations with OpenTelemetry and metric sources.
Experience with data ingest and transformation using Logstash, Filebeat, Metricbeat, or custom agents.
Proficiency in designing dashboards, custom visualizations, and alerting in Kibana.
Understanding of ILM, hot-warm-cold tiering, and Elastic security controls.

Preferred Technical and Professional Experience
Contributions to OpenTelemetry Collector or related CNCF projects.
Elastic Observability certifications or demonstrable production experience.
Experience in cost modeling and telemetry data optimization.
Exposure to Elastic Cloud, ECE, or ECK.
Familiarity with alternatives like Dynatrace, Datadog, AppDynamics or SigNoz for benchmarking.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences.
But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
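The collector work in this listing is OTEL-pipeline and Go focused, but the ingestion side can be sketched in a language-agnostic way. The example below, a minimal sketch only, uses the OpenTelemetry Python SDK to ship spans over OTLP/gRPC to a collector running on a hypothetical local endpoint, from which a real deployment would forward to Elastic APM; the service name, endpoint, and span attributes are placeholders, not values from the posting.

```python
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Resource attributes are what Elastic APM uses to group spans by service.
resource = Resource.create({"service.name": "checkout-service"})  # hypothetical service name

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)  # collector OTLP/gRPC port
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")  # example attribute, placeholder value
```

The collector's own pipeline (receivers, processors, exporters) would then be defined in its YAML configuration and deployed via Ansible, Terraform, or Helm as the responsibilities above describe.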
Posted 1 week ago
2.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for smart automation QA engineers who will help us verify and validate this large-scale distributed application. As part of the product QA, you will work on product automation, tools development and setting high benchmarks for product quality. This will require a broad top-down understanding of the Tarana Cloud Suite, its components (microservices, UI, databases, etc.), data and control flows between them, runtime infrastructure in the AWS cloud, and network communications with the radio devices. This also means understanding the runtime behaviour of the application, performance implications, infrastructure tuning and capacity management, etc.

Job Responsibilities:
Understand the Tarana Cloud Suite architecture and gain expertise in all the components involved.
Design, develop and enhance an existing Python-based automation framework.
Create UI test scripts using Selenium (or equivalent tools).
Automate system, scale and performance tests (REST APIs).
Attend feature design meetings, design comprehensive test plans, create test cases and execute tests.
Review test plans/test code written by other team members.
Own regression test areas and ensure that the product quality is intact despite code churn.
Identify defects and own them through the resolution and verification cycle.
Ensure effective communication of project and testing status to all stakeholders.
Debug problems reported by customers and reproduce them in lab environments.
Contribute to the development of tools that enhance test efficiency/management.

Required Skills & Experience:
Bachelor's/Master's degree in computer science or closely related disciplines.
2 - 12 years of QA automation experience using Python on microservices-based SaaS products deployed on cloud platforms like AWS, GCP.
Good understanding of Object-Oriented Design methodology.
Experience in functional and system level testing of APIs (familiarity with REST API, JSON, XML) and Frontend (UI).
Experience with Selenium (or equivalent tools) for UI automation.
Good understanding of QA processes and Agile methodologies.
Exposure to Linux and understanding of Docker containers, VMs, etc.
Knowledge of Networking concepts like switching and routing.
Knowledge of ELK (Elasticsearch, Logstash and Kibana), Kafka, etc.
Excellent debugging/analytical skills with a focus on solving problems.
Experience with Jira and Confluence.

Since our founding in 2009, we've been on a mission to accelerate the pace of bringing fast and affordable internet access — and all the benefits it provides — to the 90% of the world's households who can't get it. Through a decade of R&D and more than $400M of investment, we've created an entirely unique next-generation fixed wireless access technology, powering our first commercial platform, Gigabit 1 (G1). It delivers a game-changing advance in broadband economics in both mainstream and underserved markets, using either licensed or unlicensed spectrum. G1 started production in mid 2021 and has now been installed by over 160 service providers globally. We're headquartered in Milpitas, California, with additional research and development in Pune, India. G1 has been developed by an incredibly talented and pioneering core technical team.
We are looking for more world-class problem solvers who can carry on our tradition of customer obsession and ground-breaking innovation. We're well funded, growing incredibly quickly, maintaining a superb results-focused culture while we're at it, and all grooving on the positive difference we are making for people all over the planet. If you want to help make a real difference in this world, apply now!
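The Selenium-based UI automation this listing describes can be sketched roughly as follows. It is a minimal pytest example, assuming headless Chrome and a hypothetical login page URL and element locator; none of these details are taken from the posting.

```python
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


@pytest.fixture
def driver():
    # Headless Chrome keeps the test runnable on CI agents without a display.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()


def test_login_page_loads(driver):
    driver.get("https://cloud.example.com/login")  # hypothetical URL
    # Explicit wait for the username field instead of a fixed sleep.
    field = WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.ID, "username"))  # hypothetical locator
    )
    assert field.is_displayed()
    assert "Login" in driver.title
```

A production framework would layer page objects, fixtures for environment configuration, and reporting on top of a skeleton like this.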
Posted 1 week ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Senior Software Development Engineer 2 (SDE-2)
Location: Noida Sec-62
Job Type: Full-Time
Experience Required: 3+ Years

About the Role
We are looking for an experienced and highly motivated Senior Software Development Engineer 2 (SDE-2) to join our backend engineering team. In this role, you will be responsible for architecting and delivering scalable, high-performance backend solutions. You will provide technical leadership, mentor junior engineers, and collaborate across teams to deliver robust, reliable, and efficient software systems.

Key Responsibilities
Lead the design, development, and deployment of scalable backend applications and services.
Architect complex microservices-based solutions that are maintainable, reliable, and performant.
Set technical direction and best practices for the team; mentor and support junior engineers.
Build and maintain efficient, secure, and well-documented APIs.
Drive end-to-end CI/CD processes using Docker and Jenkins.
Deploy and manage containerized applications with Kubernetes, ensuring optimal performance and scalability.
Monitor and optimize system performance using tools like Datadog; proactively identify and resolve issues.
Work extensively with MongoDB, Redis, and Kafka for advanced data management and real-time streaming.
Conduct thorough code reviews and contribute to maintaining a high standard of code quality.
Collaborate with cross-functional teams including product managers, architects, and QA to ensure timely and high-quality releases.
Troubleshoot and resolve production issues efficiently, implementing preventative solutions.
Stay updated with emerging technologies and continuously drive innovation within the team.

Candidate Requirements
Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree is a plus).
3+ years of hands-on software development experience with strong backend architecture skills.
Proficient in JavaScript and TypeScript.
Extensive experience with Golang in production environments.
Deep expertise in MongoDB, Redis, and Kafka, including performance tuning and troubleshooting.
Hands-on experience with Docker, Jenkins, and CI/CD pipelines.
Proficiency in Kubernetes for managing containerized applications at scale.
Experience with Datadog or similar tools for system monitoring and optimization.
Familiarity with cloud platforms like AWS, Azure, or GCP.
Knowledge of the ELK Stack (Elasticsearch, Logstash, Kibana) is a plus.
Strong problem-solving, leadership, and communication skills.
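The Kafka/Redis streaming pattern in the responsibilities above can be sketched in a language-neutral way. The role itself calls for TypeScript and Golang; the example below is a Python approximation only, with a hypothetical topic, hosts, and key schema, showing a consumer that caches the latest event per key so read APIs can skip a database round trip.

```python
import json

import redis
from kafka import KafkaConsumer  # kafka-python client

# Hypothetical topic, group, and hosts; real deployments would load these from config.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers=["localhost:9092"],
    group_id="order-cache-writer",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
cache = redis.Redis(host="localhost", port=6379)

for message in consumer:
    event = message.value
    # Keep the latest status per order in Redis with a one-hour TTL.
    cache.set(f"order:{event['order_id']}:status", event["status"], ex=3600)
```

An equivalent Go or TypeScript consumer would follow the same shape: deserialize, derive a cache key, write with a TTL.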
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Description

Your impact
Support an industry-leading Engineering team with your skills in supporting complex Linux-based applications, service-affecting performance analytics, and developing and executing automated and manual test cases. You will take system specifications from paper to a fully functional system with the ability to do what the system was designed to do. Then, continue to support that system through its whole life cycle.

What You Will Do
You will work with a team and be agile in your day-to-day priorities.
You will demonstrate your ability to design, install, configure, upgrade, and troubleshoot both Linux operating systems and Linux-based applications.
You will demonstrate your ability to deliver high-quality results in a fast-paced environment and be comfortable managing, manipulating, and summarizing large quantities of data.
Use performance analytics to track, aggregate, and visualize key performance indicators over time for different areas of the network.
Configure Linux-based applications, collect and analyze data, and develop and execute performance, load, and stress scenarios and test cases.
Create, test, and deploy procedures in production systems; create and run scripts; provide troubleshooting support for architecture, infrastructure, and network engineering teams on the servers you are responsible for.
Use troubleshooting procedures and technical information to identify the severity of incidents and carry out the resolution process accordingly; research issues that do not have a recognized solution and prepare a resolution process; maintain local documentation, kept updated, on all features of the servers.
Perform analysis and/or troubleshooting of high-visibility tasks or issues.
Configure complex systems to create products and services.
Take ownership and administrative responsibility for the equipment in the data centers that provide products and services for Commercial Aviation.
You will create, modify, or improve shell and Python scripts to fix issues, enhance functionality, or update Linux-based applications.

Your Required Experience/Skills
Bachelor's degree in computer science, computer engineering, or related technologies.
Seven years of experience in systems engineering within the networking industry.
Expertise in Linux deployment, scripting and configuration.
Expertise in TCP/IP communications stacks and optimizations.
Experience with ELK (Elasticsearch, Logstash, Kibana), Grafana, data streaming (e.g., Kafka), and software visualization.
Experience in analyzing and debugging code defects in the production environment.
Proficiency in version control systems such as Git.
Ability to design comprehensive test scenarios for systems usability, execute tests, and prepare detailed reports on effectiveness and defects for production teams.
Full-cycle Systems Engineering experience covering requirements capture, architecture, design, development, and system testing.
Demonstrated ability to work independently and collaboratively within cross-functional teams.
Proficient in installing, configuring, debugging, and interpreting performance analytics to monitor, aggregate, and visualize key performance indicators over time.
Proven track record of directly interfacing with customers to address concerns and resolve issues effectively.
Strong problem-solving skills, capable of driving resolutions autonomously without senior engineer support.
Experience in configuring MySQL and PostgreSQL, including setup of replication, troubleshooting, and performance improvement.
Proficiency in networking concepts such as network architecture, protocols (TCP/IP, UDP), routing, and VLANs, essential for deploying new system servers effectively.
Proficiency in shell/Bash scripting on Linux systems.
Proficient in utilizing, modifying, troubleshooting, and updating Python scripts and tools to refine code.
Excellent written and verbal communication skills. Ability to document processes, procedures, and system configurations effectively.

Your success in this role will look like:
Ability to handle stress and maintain quality. This includes resilience to effectively manage stress and pressure, as well as a demonstrated ability to make informed decisions, particularly in high-pressure situations.
Excellent written and verbal communication skills, including the ability to document processes, procedures, and system configurations effectively.
This role is required to be on-call 24/7 to address service-affecting issues in production.
It is required to work during the business hours of Chicago, aligning with local time for effective coordination and responsiveness to business operations and stakeholders in the region.

It would be nice if you had:
Solid software development experience in the Python programming language, with the ability to understand, execute, and debug issues, as well as develop new tools using Python.
Experience in design, architecture, traffic flows, configuration, debugging, and deploying Deep Packet Inspection (DPI) systems.
Proficiency in managing and configuring AAA systems (Authentication, Authorization, and Accounting).

How we support you:
Hybrid work environment offering up to two days per week work from home (for eligible positions).
Development opportunities supporting professional growth championed by our dedicated Learning & Development team. 20-25% of our positions are hired internally!
Ways to get involved: satellite launch parties, company connect events, charitable activities, team social events and recognition programs.
Wide range of benefits and perks to help you stay healthy, happy, and productive. These include paid leave programs, medical, tuition reimbursement, and retirement benefits, employee wellness offerings, and more! These benefits are designed to support your overall well-being and help you succeed in your role.

Equal Employment Opportunity
Intelsat is an equal opportunity employer and does not discriminate based upon race, color, religion, sex, national origin, ethnicity, age, disability, pregnancy, veteran status, sexual orientation, gender identity or any other characteristic protected by applicable law. While it is important to note that meeting the minimum qualifications is a fundamental requirement for consideration, if you are enthusiastic about this role and are unsure how well your experience aligns with these requirements, we encourage you to apply. Our recruitment team will assess your application and determine if your skills and qualifications meet the essential criteria for this role or whether there might be another role within our organization that is a better match.

About Us
As the foundational architects of satellite technology, Intelsat applies our expertise to develop breakthrough solutions that advance and secure boundless applications for our customers and partners. At Intelsat, we increase human potential by connecting people, communities, businesses, and governments.
Our employees enjoy a casual and collaborative environment, where we celebrate professional excellence in pursuit of the corporate mission. We hire skilled professionals who work in various areas such as: satellite engineering, network operations, cloud architecture, accounting, sales, legal, and more. Browse our current job openings or create a professional profile to stay informed about opportunities that match your interests and expertise.

Intelsat is subject to regulation by certain U.S. Government national security agencies, which require that we collect and share certain Personally Identifiable Information (“PII”) with the U.S. Government to obtain permission to employ non-U.S. persons in certain roles. If selected for a role at Intelsat, we may collect and share your PII for these purposes.

Intelsat is an Equal Opportunity Employer
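The performance-analytics duties in this listing (tracking, aggregating, and visualizing KPIs from large volumes of data) lend themselves to a short scripting sketch. The example below is hypothetical: it assumes a plain-text access log whose lines contain a timestamp, an endpoint, and a latency in milliseconds, and the path and format are placeholders.

```python
import re
from collections import defaultdict
from statistics import median

# Hypothetical log line format: "<timestamp> <endpoint> <latency_ms>"
LINE = re.compile(r"^\S+\s+(?P<endpoint>\S+)\s+(?P<latency>\d+(?:\.\d+)?)$")


def summarize(log_path: str) -> dict:
    """Aggregate per-endpoint latency KPIs from a plain-text log file."""
    samples = defaultdict(list)
    with open(log_path) as handle:
        for line in handle:
            match = LINE.match(line.strip())
            if match:
                samples[match["endpoint"]].append(float(match["latency"]))

    report = {}
    for endpoint, values in samples.items():
        values.sort()
        report[endpoint] = {
            "count": len(values),
            "p50_ms": median(values),
            "p95_ms": values[int(0.95 * (len(values) - 1))],  # nearest-rank 95th percentile
            "max_ms": values[-1],
        }
    return report


if __name__ == "__main__":
    print(summarize("/var/log/app/access.log"))  # placeholder path
```

In a real deployment the same aggregation would more likely be pushed into ELK or Grafana dashboards; a script like this is the kind of ad hoc tooling the role calls for during troubleshooting.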
Posted 1 week ago
12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Success in the role requires agility and results orientation, strategic and innovative thinking, a proven track record of delivering new customer-facing software products at scale, rigorous analytical skills, and a passion for automation and data-driven approaches to solving problems.

Director Of ECommerce Engineering - Responsibilities

Leadership and Delivery
Oversee and lead the engineering project delivery for the ECommerce Global Multi-Tenant Platform, ensuring high availability, scalability, and performance to support global business operations.
Define and execute the engineering strategy, aligning with the company's business goals and long-term vision for omnichannel retail.
Ensure high-quality deliverables by establishing robust processes for code reviews, testing, and deployment.

Cross-Functional Collaboration
Actively collaborate with Product Management, Business Stakeholders, and other Engineering Teams to define project requirements and deliver customer-centric solutions.
Serve as a key point of contact for resolving technical challenges and ensuring alignment between business needs and technical capabilities.
Promote seamless communication between teams to deliver cross-functional initiatives on time and within budget.

Talent Acquisition and Development
Build a strong and diverse engineering team by attracting, recruiting, and retaining top talent.
Design and implement a robust onboarding program to ensure new hires are set up for success.
Coach team members to enhance technical expertise, problem-solving skills, and leadership abilities, fostering a culture of continuous learning and improvement.
Maintain a strong pipeline of talent by building relationships with local universities, engineering communities, and industry professionals.

Performance Management
Define clear, measurable goals for individual contributors and teams, ensuring alignment with broader organizational objectives.
Conduct regular one-on-one meetings to provide personalized feedback, career guidance, and development opportunities.
Manage performance reviews and recognize high-performing individuals, while providing coaching and support to those needing improvement.
Foster a culture of accountability, where team members take ownership of their work and deliver results.

Technology Leadership
Champion the adoption of best practices in software engineering, including agile methodologies, DevOps, and automation.
Facilitate and encourage knowledge sharing and expertise in critical technologies, such as cloud computing, microservices, and AI/ML.
Evaluate and introduce emerging technologies that align with business goals, driving innovation and competitive advantage.

Continuous Education and Domain Expertise
Develop and execute a continuous education program to upskill team members on both key technologies and the Williams-Sonoma business domain.
Organize training sessions, workshops, and certifications to keep the team updated on the latest industry trends.
Encourage team members to actively participate in tech conferences, hackathons, and seminars to broaden their knowledge and network.

Resource Planning and Execution
Accurately estimate development efforts for projects, taking into account complexity, risks, and resource availability.
Develop and implement project plans, timelines, and budgets to deliver initiatives on schedule.
Oversee system rollouts and implementation efforts, ensuring smooth transitions and minimal disruptions to business operations.
Optimize resource allocation to maximize team productivity and ensure proper workload distribution.

Organizational Improvement
Champion initiatives to improve the engineering organization's culture, focusing on collaboration, transparency, and inclusivity.
Continuously evaluate and refine engineering processes to increase efficiency and reduce bottlenecks.
Promote team well-being by fostering a positive and supportive work environment where engineers feel valued and motivated.
Lead efforts to make the organization a "Great Place to Work", including regular engagement activities, mentorship programs, and open communication.

System Understanding and Technical Oversight
Develop a deep understanding of critical systems and processes, including platform architecture, APIs, data pipelines, and DevOps practices.
Provide technical guidance to the team, addressing complex challenges and ensuring alignment with architectural best practices.
Partner with senior leaders to align technology decisions with business priorities and future-proof the company's systems.

Innovation and Transformation
Play a pivotal role in transforming Williams-Sonoma into a leading technology organization by implementing cutting-edge solutions in eCommerce, Platform Engineering, AI, ML, and Data Science.
Drive the future of omnichannel retail by conceptualizing and delivering innovative products and features that enhance customer experiences.
Actively represent the organization in the technology community, building a strong presence through speaking engagements, partnerships, and contributions to open-source projects.
Identify opportunities for process automation and optimization to improve operational efficiency.

Additional Responsibilities
Be adaptable to perform other duties as required, addressing unforeseen challenges and contributing to organizational goals.
Stay updated on industry trends and competitive landscapes to ensure the company remains ahead of the curve.

Criteria

Experience and Expertise
Extensive Industry Experience: 12+ years of experience in developing and delivering eCommerce mobile applications and retail store solutions with multiple concurrent tracks of development and operations. Proven success in leading initiatives that drive business outcomes, scalability, and innovation in eCommerce platforms.
Leadership and Team Management: 5+ years of experience in building and managing medium-scale teams (10–20 team members) of engineers, technical leads, and managers. Demonstrated ability to optimize team performance, foster a culture of collaboration, and implement career development initiatives.
Project Lifecycle Management: Skilled in managing projects through the entire lifecycle, from concept and design to development, testing, deployment, and maintenance. Adept at balancing technical, business, and resource constraints to deliver high-quality outcomes.

Technical And Professional Skills
Project and Technical Leadership: Strong project management skills with the ability to lead and mentor technical professionals. Proven experience in scoping, prioritizing, and delivering projects on time, within budget, and aligned with business objectives.
Analytical and Decision-Making Skills: Ability to systematically gather and analyze relevant data from diverse sources to address complex issues. Skilled in making prompt, insightful decisions under pressure and in ambiguous situations.
Interpersonal And Communication Skills
Business Relationships and Conflict Management: Demonstrated ability to build trust-based business relationships across teams and external stakeholders. Proven capability to anticipate, mitigate, and resolve conflicts across workgroups to maintain team cohesion and productivity.
Communication Excellence: Strong verbal and written communication skills, with the ability to articulate complex ideas effectively to technical and non-technical audiences. Experienced in delivering engaging presentations to different audiences, including senior leadership and external partners.
Interpersonal Effectiveness: Exceptional interpersonal skills, including team collaboration, negotiation, and mentorship. A team player who values diverse perspectives, respects all individuals regardless of seniority, and actively contributes to team success.

Operational And Organizational Skills
Execution and Results Orientation: Proven track record in developing and executing detailed plans, managing budgets, and delivering results under tight deadlines. Demonstrated ability to handle complex, fast-paced projects with competing priorities.
Vendor and Stakeholder Management: Skilled in negotiating and managing vendor relationships, contracts, and service-level agreements (SLAs).
Self-Motivation and Independence: Self-driven with the ability to work independently, take initiative, and proactively solve problems. Comfortable operating in ambiguous environments, making calculated decisions, and managing risks effectively.

Educational Qualification
Bachelor's degree in Computer Science, Engineering, or a related field. Equivalent work experience will also be considered.

Core Technical Criteria

Backend Expertise (Java)
Strong Java Knowledge: Expertise in Java frameworks such as Spring & Spring Boot. In-depth understanding of RESTful API design, implementation, and optimization. Knowledge of microservices architecture and tools like Kubernetes, Docker, and API Gateway.
Scalability and Performance: Experience in building scalable, high-performance backend systems to handle high traffic. Proficiency in tuning the Java Virtual Machine (JVM) for optimal performance.
Database Management: Hands-on experience with relational databases like Oracle, MySQL, PostgreSQL, and NoSQL databases like MongoDB, Cassandra, or Redis. Ability to optimize database queries and manage large datasets effectively.

Frontend Expertise (Vue.js)
Proficiency in Vue.js: Deep understanding of Vue.js components, Vuex (state management), Vue Router, and the ecosystem. Ability to optimize frontend code for performance, SEO, and user experience.
Modern Web Development: Familiarity with JavaScript (ES6+), TypeScript, and tools like Webpack, Vite, or Rollup. Hands-on experience in responsive design, cross-browser compatibility, and progressive web apps (PWAs).

Full Stack Knowledge
Understanding of frontend-backend communication patterns, including REST APIs, WebSockets, and GraphQL.
Ability to troubleshoot and resolve issues across the stack (frontend, backend, and database).

Platform and Infrastructure Expertise
Cloud and DevOps
Proficiency in cloud platforms like AWS, Google Cloud Platform (GCP), or Azure.
Experience with CI/CD pipelines using Jenkins, GitLab CI/CD, or equivalent tools.
Familiarity with containerization (Docker) and orchestration tools like Kubernetes.
Experience in implementing scalable, fault-tolerant architectures in a cloud environment.
Security and Compliance
In-depth knowledge of eCommerce security standards, including PCI DSS compliance for payment processing.
Experience in implementing security best practices, such as authentication (OAuth2, SSO), encryption, and secure API design.

Observability and Monitoring
Knowledge of logging and monitoring tools like ELK Stack (Elasticsearch, Logstash, Kibana), Prometheus, Grafana, and Datadog.
Experience in implementing robust error tracking and alerting mechanisms.

Architectural Expertise
System Design and Architecture
Proven experience in designing and delivering eCommerce platforms that are scalable, reliable, and fault-tolerant.
Knowledge of event-driven architectures using Kafka, RabbitMQ, or similar tools.
Expertise in load balancing, caching strategies (e.g., CDN, Redis, Memcached), and database partitioning.
API Management
Experience designing and implementing secure, versioned, and scalable APIs for both internal and external integrations.
Knowledge of API Gateway technologies and API rate-limiting strategies.

Leadership and Team Management
Technical Leadership
Ability to guide the team in code reviews, setting coding standards, and adopting best practices in Java and Vue.js development.
Hands-on experience in mentoring and growing engineering talent, specifically for eCommerce-focused teams.
Collaboration with Product and UX
Ability to collaborate with Product Managers and UX/UI Designers to align technical implementation with business goals and user experience.
Experience in leading discussions on frontend performance optimization, UX responsiveness, and accessibility (WCAG standards).

Emerging Technologies and Trends
AI/ML and Personalization
Knowledge of AI/ML-driven personalization engines and recommendations for eCommerce platforms (e.g., product recommendations, search optimization).
Search and Catalog Optimization
Expertise in search technologies such as Elasticsearch, Solr, or custom implementations for product catalogs.

Other Must-Have Skills
Performance Optimization
Proven experience in load testing, stress testing, and optimizing eCommerce platforms to handle millions of transactions.
Version Control and Collaboration Tools
Expertise with Git workflows, and tools like GitHub, Bitbucket, or GitLab.
Familiarity with Agile tools such as Jira & Confluence.

About Us
Founded in 1956, Williams-Sonoma Inc. is the premier specialty retailer of high-quality products for the kitchen and home in the United States. Today, Williams-Sonoma, Inc. is one of the United States' largest e-commerce retailers with some of the best known and most beloved brands in home furnishings. Our family of brands are Williams-Sonoma, Pottery Barn, Pottery Barn Kids, Pottery Barn Teen, West Elm, Williams-Sonoma Home, Rejuvenation, GreenRow and Mark and Graham. We currently operate retail stores globally. Our products are also available to customers through our catalogs and online worldwide.
Williams-Sonoma has established a technology center in Pune, India to enhance its global operations. The India Technology Center serves as a critical hub for innovation and focuses on developing cutting-edge solutions in areas such as e-commerce, supply chain optimization, and customer experience management. By integrating advanced technologies like artificial intelligence, data analytics, and machine learning, the India Technology Center plays a crucial role in accelerating Williams-Sonoma's growth and maintaining its competitive edge in the global market.
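Of the technical criteria above, the caching strategy item is the easiest to illustrate compactly. The sketch below shows a generic cache-aside read in Python rather than the Java stack the role specifies, with a hypothetical key scheme and a stubbed database call standing in for the real catalog service.

```python
import json

import redis

cache = redis.Redis(host="localhost", port=6379)  # placeholder connection
TTL_SECONDS = 300  # short TTL keeps catalog data reasonably fresh


def fetch_product_from_db(product_id: str) -> dict:
    # Stand-in for a real database or catalog-service call.
    return {"id": product_id, "name": "Placeholder product", "price_usd": 49.0}


def get_product(product_id: str) -> dict:
    """Cache-aside read: try Redis first, fall back to the database, then populate the cache."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = fetch_product_from_db(product_id)
    cache.set(key, json.dumps(product), ex=TTL_SECONDS)
    return product
```

The same pattern maps directly onto the Spring/Redis combination named in the criteria, where an annotation-driven cache would typically replace the explicit get/set calls.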
Posted 1 week ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
A career in IBM Software means you'll be part of a team that transforms our customer's challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their career. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your Role And Responsibilities
As a Software Development Manager, you'll manage software development, enhance product experiences, and scale our team's capabilities. You'll manage careers, streamline hiring, collaborate with product, and drive innovation. We seek proactive professionals passionate about team growth, software architecture, coding, and process enhancements. Mastery of frameworks, deployment tech, and cloud APIs is essential, as well as adaptability to innovative technologies.
We are seeking a Development Manager to join our Connector Mission leadership team in the IBM Software Product Development organization, under the product IBM App Connect. IBM® App Connect instantly connects applications and data from existing systems and modern technologies across all environments. App Connect offers enterprise service bus (ESB) and agile integration architecture (AIA) microservices deployment of integration artifacts, allowing businesses to deploy to a multitude of flexible integration patterns. Development Managers with agile product development experience in cloud native or OCP native web-based products and managed services are desired.

Your Primary Responsibilities Include
Solutions Development: Lead the development of innovative solutions to enhance our product and development experience, effectively contributing to making our software better.
Team Growth and Management: Manage the career growth of team members, scale hiring and development processes, and foster a culture of continuous improvement within the team.
Strategic Partnership: Partner with product teams to brainstorm ideas and collaborate on delivering an exceptional product, contributing to the overall success of the organization.
Technical Direction: Provide technical guidance by actively participating in architectural discussions, developing code, and advocating for new process improvements to drive innovation and efficiency.

As a Software Development Manager, You
Are experienced with client-server architectures, networking protocols, application development, and using databases.
Have hands-on experience in Application Development.
Are experienced in People Management.
Are experienced in Product Delivery, Support and Maintenance.
Have experience using and developing APIs.
Understand user and system requirements.
Have an understanding of, or experience with, Agile development methodology.

What You'll Do:
You'll work in a dynamic, collaborative environment to understand requirements, design, code and test innovative applications, and support those applications for our highly valued customers.
You'll employ IBM's Design Thinking to create products that provide a great user experience along with high performance, security, quality, and stability.
Design and code services, applications and databases that are reusable, scalable and meet critical architecture goals.
Create Application Programming Interfaces (APIs) that are clean, well-documented, and easy to use.
Create and configure Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) applications.
Design and implement large scale systems and Service Oriented Architectures (SOA) that enable continuous delivery.
Manage a team of approximately 15 software engineers.
Collaborate with our development, DevOps and leadership teams worldwide.

Who You Are:
You are highly motivated and have a passion for creating and supporting great products.
You thrive on collaboration, working side by side with people of all backgrounds and disciplines, and you have very strong verbal and written communication skills.
You are great at solving problems, debugging, troubleshooting, designing and implementing solutions to complex technical issues.
You have a solid understanding of software development and programming languages.
You have the ability to learn new skills quickly and use the skills efficiently.

Required Technical And Professional Expertise
Overall 10+ years of industry experience and 5+ years of experience in leading teams as a people manager.
Experience with Docker and container orchestration technologies such as OpenShift Container Platform (OCP) and Kubernetes.
Familiarity with cloud-based providers (IBM Cloud, AWS, Azure, Google Compute Engine, etc.) and their hosting tools and APIs.
Experience working with and developing APIs.
Experience working with operating systems (Linux, Red Hat OpenShift, etc.).
Familiarity with various cloud and DB technologies: Docker, Kubernetes, Elasticsearch, Logstash, Kibana, CouchDB, Cassandra, and Postgres.
Experience in full stack development working with servers, applications and databases using Node.js, JavaScript, React.js, etc.
Experience in delivery via Agile methodology.
Experience in Product Development, Maintenance and Support.
Experience in Customer Support and managing escalations.

Preferred Technical And Professional Experience
Solid experience with OCP-native containers.
Scripting and deployment topology knowledge: Python, shell, Ansible, Chef, Puppet, etc.
Monitoring workloads through clouds (New Relic, Sysdig, Elasticsearch, Logstash, and Kibana).
Cloud concepts around auto-scale and auto-recover cloud components.
General IT security standards, principles, and compliances (ISO 27k, SOC 2, GDPR, PCI, etc.).
Familiarity with continuous delivery and CI/CD technologies: ArgoCD, Terraform, etc.
Familiarity with RPA or AI technologies.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Requirements & Capabilities

CI/CD & Automation: Design, implement, and maintain CI/CD pipelines using tools like Jenkins, Azure DevOps and Argo CD. Automate build, test, and deployment processes to improve efficiency.
Infrastructure as Code (IaC): Manage infrastructure using Terraform and Ansible. Ensure infrastructure is scalable, resilient, and version controlled.
Cloud & Server Management: Deploy and manage applications on Azure, GCP, or on-prem servers. Optimize cloud costs and ensure security best practices.
Containerization & Orchestration: Work with Docker and Kubernetes for containerized application deployment. Manage cluster scaling, networking, and security in K8s environments.
Monitoring & Logging: Set up monitoring tools like Prometheus, ELK (Elasticsearch, Logstash, Kibana), and Azure Monitor. Ensure proactive incident response with log analysis and alerting mechanisms.
Security & Compliance: Implement security best practices in CI/CD, cloud, and server configurations. Manage role-based access controls (RBAC), secrets management, and vulnerability scanning.
Scripting & Automation: Write scripts in Bash, Python, or Go for automation tasks. Optimize system performance through automated solutions.

Required Skills
Proficiency in CI/CD tools, Infrastructure as Code, cloud platforms, containerization, monitoring, and scripting.

Preferred Skills
Experience with Azure, GCP, Docker, Kubernetes, and security best practices.
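As a small illustration of the monitoring and alerting capability described above, the sketch below exposes application metrics that a Prometheus server could scrape. The metric names, labels, and port are assumptions chosen for the example, not anything mandated by the role.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; real services would follow their own naming conventions.
REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds", ["endpoint"])


def handle_request(endpoint: str) -> None:
    # Time the handler and count the request, both labelled by endpoint.
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        handle_request("/checkout")
```

Prometheus would then scrape this endpoint and alerting rules (or Grafana panels) would be layered on top, which is the proactive incident-response loop the posting refers to.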
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Job Description: Azure DevOps Engineer
Location: Pan India
Experience Range: 5-8 years

Responsibilities:

Infrastructure as Code & Cloud Automation
Design and implement Infrastructure as Code (IaC) using Terraform, Ansible, or equivalent for both Azure and on-prem environments.
Automate provisioning and configuration management for Azure PaaS services (App Services, AKS, Storage, Key Vault, etc.).
Manage hybrid cloud deployments, ensuring seamless integration between Azure and on-prem alternatives.

CI/CD Pipeline Development (Without Azure DevOps)
Develop and maintain CI/CD pipelines using GitHub Actions or Jenkins.
Automate containerized application deployment using Docker and Kubernetes (AKS).
Implement canary deployments, blue-green deployments, and rollback strategies for production releases.

Cloud Security & Secrets Management
Implement role-based access control (RBAC) and IAM policies across cloud and on-prem environments.
Secure API and infrastructure secrets using HashiCorp Vault (instead of Azure Key Vault).

Monitoring, Logging & Observability
Set up observability frameworks using Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
Implement centralized logging and monitoring across cloud and on-prem environments.

Must Have Skills & Experience:

Cloud & DevOps
Azure PaaS Services: App Services, AKS, Azure Functions, Blob Storage, Redis Cache
Kubernetes & Containerization: Hands-on experience with AKS, Kubernetes, Docker
CI/CD Tools: Experience with GitHub Actions, Jenkins
Infrastructure as Code (IaC): Proficiency in Terraform

Security & Compliance
IAM & RBAC: Experience with Active Directory, Keycloak, LDAP
Secrets Management: Expertise in HashiCorp Vault or Azure Key Vault
Cloud Security Best Practices: API security, network security, encryption

Networking & Hybrid Cloud
Azure Networking: Knowledge of VNets, Private Endpoints, Load Balancers, API Gateway, Nginx
Hybrid Cloud Connectivity: Experience with VPN Gateway, Private Peering

Monitoring & Performance Optimization
Observability Tools: Prometheus, Grafana, ELK Stack, Azure Monitor & App Insights
Logging & Monitoring: Experience with ElasticSearch, Logstash, OpenTelemetry, Log Analytics

Good to Have Skills & Experience:
Experience with additional IaC tools (Ansible, Chef, Puppet)
Experience with additional container orchestration platforms (OpenShift, Docker Swarm)
Knowledge of advanced Azure services (e.g., Azure Logic Apps, Azure Event Grid)
Familiarity with cloud-native monitoring solutions (e.g., CloudWatch, Datadog)
Experience in implementing and managing multi-cloud environments

Key Personal Attributes:
Strong problem-solving abilities
Ability to work in a fast-paced and dynamic environment
Excellent communication skills and ability to collaborate with cross-functional teams
Proactive and self-motivated, with a strong sense of ownership and accountability
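The HashiCorp Vault requirement above can be sketched with the hvac Python client. This is a minimal illustration only: the address, token handling, and secret path are placeholders, and a production setup would authenticate via AppRole or Kubernetes auth rather than a raw token read from the environment.

```python
import os

import hvac

# Placeholder address and token; real deployments use AppRole, Kubernetes auth, etc.
client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "http://127.0.0.1:8200"),
    token=os.environ.get("VAULT_TOKEN"),
)

# Read a secret from the KV v2 engine mounted at the default "secret/" path.
response = client.secrets.kv.v2.read_secret_version(path="myapp/database")  # hypothetical path
db_password = response["data"]["data"]["password"]
```

In a pipeline, a step like this (or the Vault CLI/agent equivalent) would inject the secret at deploy time instead of storing it in the repository or CI variables.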
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Senior Elasticsearch Developer
Location - Chennai/Bangalore/Hyderabad
Role - Full-time
Years of Experience - 5+

Role & responsibilities
Experience with Elasticsearch - mandatory.
Design and implement highly scalable ELK (Elasticsearch, Logstash, and Kibana) stack and ElastiCache solutions.
Build reports using APIs that leverage Elasticsearch and ElastiCache.
Good experience in query languages and writing complex queries with joins that deal with large amounts of data.
Experience in end-to-end low-level design, development, administration, and delivery of ELK-based reporting solutions.
Experience building REST APIs - Node/PHP/Python/Java.

Preferred candidate profile
Willingness to cross-train in Coveo Search
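A rough sketch of the kind of reporting query this role describes: a filtered search with a date histogram and a terms sub-aggregation, using the official Python client (8.x API assumed). The endpoint, index pattern, and field names are hypothetical.

```python
from elasticsearch import Elasticsearch  # assumes the 8.x Python client

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Daily error counts per service over the last 7 days (hypothetical index and fields).
response = es.search(
    index="app-logs-*",
    size=0,  # aggregations only, no hits
    query={
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-7d/d"}}},
            ]
        }
    },
    aggs={
        "per_day": {
            "date_histogram": {"field": "@timestamp", "calendar_interval": "day"},
            "aggs": {"per_service": {"terms": {"field": "service", "size": 10}}},
        }
    },
)

for day in response["aggregations"]["per_day"]["buckets"]:
    counts = {b["key"]: b["doc_count"] for b in day["per_service"]["buckets"]}
    print(day["key_as_string"], counts)
```

A reporting API would wrap a query like this behind a REST endpoint, which matches the Node/PHP/Python/Java API-building item above.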
Posted 2 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Big Data Architect

Skills Required
4 years of experience as a Big Data Architect
Proficient in Spark, Scala, Hadoop MapReduce/HDFS, Pig, Hive, AWS cloud computing
Hands-on experience with tools like: EMR, EC2, Pentaho BI, Impala, Elasticsearch, Apache Kafka, Node.js, Redis, Logstash, StatsD, Ganglia, Zeppelin, Hue, Kettle
Sound experience in Machine Learning, ZooKeeper, Bootstrap.js, Apache Flume, Fluentd, collectd, Sqoop, Presto, Tableau, R, Grok, MongoDB, Apache Storm, HBase
Hands-on experience in development - Core Java & Advanced Java

Job Requirement:
Bachelor's degree in Computer Science, Information Technology, or MCA
4 years of experience in a relevant role
Good analytical and problem-solving ability
Detail oriented with excellent written and verbal communication skills
The ability to work independently as well as collaborate with a team

Experience: 10 Years
Job Location: Pune/Hyderabad, India
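As a compact illustration of the Spark skills listed above, here is a PySpark sketch that rolls up event counts from a hypothetical Parquet dataset; the input path, column names, and output location are placeholders, and the same job could equally be written in Scala on this stack.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-rollup").getOrCreate()

# Hypothetical input path; could be HDFS, S3 (s3a://), or local files.
events = spark.read.parquet("hdfs:///data/events/2024/")

daily_counts = (
    events
    .withColumn("day", F.to_date("event_time"))
    .groupBy("day", "event_type")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("unique_users"),
    )
    .orderBy("day")
)

# Write the rollup back out for downstream BI tools (Pentaho, Tableau, etc.).
daily_counts.write.mode("overwrite").parquet("hdfs:///reports/daily_event_counts/")
spark.stop()
```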
Posted 2 weeks ago