2858 jobs matched
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Saleshandy – New Product Team | Build from Zero | Own What You Ship

Experience Level: 3–5 years
Core Focus: Distributed backend systems, job queues, infra observability, production readiness
Tech Stack: Node.js, MySQL, Redis, Kafka, Grafana/Prometheus, AWS
Bonus: ClickHouse, Elasticsearch, Docker
Context: Join the founding backend team building a new AI-powered product at Saleshandy. You'll shape systems from the ground up, from architecture to API design, job schedulers to observability.

What's The Role About
This isn't just another backend role; it's your shot to build a product from scratch. You'll join a small team launching a brand-new SaaS product, where you'll own everything backend: queues, databases, infra, performance, and uptime. You'll ship weekly, make tradeoffs, debug live, and influence both product and architecture without red tape or layers of approvals.

Why Join Us
Purpose: You're not patching old code; you're laying the foundation for a new system that will be used by thousands.
Growth: You'll own entire domains fast. From system design to metrics to scaling, you'll learn by doing, not waiting.
Motivation: If you've ever said, "I wish I could own something end-to-end, build cleanly, and move fast without BS," this is it. You will own the backend.

Your Main Goals
Build and Launch a Core Feature (within 30–45 days): Lead the development of a key backend module: a job scheduler, email engine, enrichment flow, or core DB model. Outcome: in prod, with logging, alerting, and a rollback plan.
Optimize Latency for High-Throughput Workloads (within 60 days): Own and improve performance across job processors or APIs. Outcome: ≥40% drop in P95 or P99 latency.
Stand Up Observability and Alerts (within 60–90 days): Set up Grafana/Prometheus dashboards and actionable alerts for one major backend component. Outcome: issue visibility before incidents.
Own and Stabilize a Domain (within 90 days): Take ownership of a system (e.g., outbound queue, webhook processor), make it reliable, and evolve it. Outcome: 99.99% uptime, clean ownership.

Important Tasks
Design and build scalable backend services in Node.js
Implement job workers with retries, rate limits, and deduplication
Model relational data in MySQL for high-throughput writes
Set up metrics and logs using Grafana, Prometheus, and tracing tools
Debug production issues quickly using logs, traces, and infra metrics
Collaborate directly with product/design for fast feedback loops
Contribute to internal playbooks, code patterns, and architecture

Culture Fit – Are You One of Us?
You ship small, frequent, production-ready PRs
You write down ideas, edge cases, and fixes
You unblock yourself with logs before asking for help
You question vague specs early, then execute fully
You treat the backend like a product: clean, tested, observable
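The job-worker pattern this posting describes (retries, deduplication, dead-lettering) can be sketched minimally. The posting's stack is Node.js with Redis/Kafka; this in-memory Python sketch only illustrates the retry-with-dedup logic, the `Worker`/`submit` names are hypothetical, and production code would add exponential backoff, a rate limiter, and a real dead-letter queue with alerting.

```python
import time


class Worker:
    """Toy job worker: deduplicates by job id and retries a flaky handler."""

    def __init__(self, handler, max_retries=3):
        self.handler = handler
        self.max_retries = max_retries
        self.seen = set()        # dedup keys of jobs already accepted
        self.results = {}        # job_id -> "done" | "failed"

    def submit(self, job_id, payload):
        if job_id in self.seen:  # deduplication: drop repeat submissions
            return False
        self.seen.add(job_id)
        for attempt in range(1, self.max_retries + 1):
            try:
                self.handler(payload)
                self.results[job_id] = "done"
                return True
            except Exception:
                # Production code would back off exponentially, e.g. 2 ** attempt
                time.sleep(0)
        # Out of retries: park the job instead of retrying forever
        self.results[job_id] = "failed"  # production: dead-letter queue + alert
        return False
```

A handler that fails twice before succeeding is processed exactly once end-to-end, and resubmitting the same job id is rejected as a duplicate.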
Posted 1 week ago
10.0 - 15.0 years
0 Lacs
Greater Chennai Area
On-site
Customers trust the Alation Data Intelligence Platform for self-service analytics, cloud transformation, data governance, and AI-ready data, fostering data-driven innovation at scale. With more than $340M in funding (valued at over $1.7 billion) and nearly 600 customers, including 40% of the Fortune 100, Alation helps organizations realize value from data and AI initiatives. Alation was recognized in 2024 as one of Inc. Magazine's Best Workplaces for the fifth time, a testament to our commitment to creating an inclusive, innovative, and collaborative environment.

Collaboration is at the forefront of everything we do. We strive to bring diverse perspectives together and empower each team member to contribute their unique strengths to live out our values each day. These are: Move the Ball, Build for the Long Term, Listen Like You're Wrong, and Measure Through Customer Impact. Joining Alation means being part of a fast-paced, high-growth company where every voice matters, and where we're shaping the future of data intelligence with AI-ready data. Join us on our journey to build a world where data culture thrives and curiosity is celebrated each day!

Job Description
As a Manager/Sr. Manager of Technical Support at Alation, you will lead the day-to-day operations of a team of Technical Support Engineers. You will lead a customer-facing team as a key leader in the customer success organization. You will be responsible for directly monitoring, reporting, and driving improvements to team-level metrics and KPIs, acting as an escalation point with customers and internal teams, and optimizing and developing support processes and tools. Your work will be cross-functional, involving engineering, QA, DevOps, product management, and sales. The location is Chennai (hybrid model).
What You'll Do
Manage a team of senior-level Technical Support Engineers
Develop capacity forecasts and resource allocation models to ensure proper coverage
Drive the scaling, onboarding, and ongoing specialization of the team
Implement innovative processes to increase support efficiency and overall customer satisfaction
Handle customer escalations and assist with troubleshooting and triaging incidents
Manage the backlog and ensure that Support SLAs and KPIs are met
Partner with Engineering & Product to prioritize issues and product improvements

You Should Have
10-15 years of enterprise application support or operations experience, supporting customers in on-premise, cloud, and hybrid setups
Excellent communication skills, with a strong ability to discuss complex technical concepts with customers, engineers, and product managers
Prior experience managing a team of frontline and senior-level Support Engineers
Solid understanding of data platforms, data management, analytics, or the BI space
Self-starter with strong creative problem-solving, facilitation, and interpersonal skills
First-hand leadership experience working in a global organization and partnering with regional managers and leads to ensure a seamless customer experience
Experience troubleshooting Linux and running shell commands
Understanding of relational databases, such as Oracle and Postgres. SQL is a must.

A big plus if you have experience in the following areas:
Postgres (DB internals)
Elasticsearch, NoSQL, MongoDB
Hadoop ecosystem (Hive, HBase)
Cloud technologies and frameworks such as Kubernetes and Docker
Experience scoping or building tooling to improve the support experience

Alation, Inc. is an Equal Employment Opportunity employer.
All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, veteran status, genetic information, ethnicity, citizenship, or any other characteristic protected by law. The Company will strive to provide reasonable accommodations to permit qualified applicants who have a need for an accommodation to participate in the hiring process (e.g., accommodations for a job interview) if so requested. This company participates in E-Verify. Click on any of the links below to view or print the full poster. E-Verify and Right to Work.
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Description
Experience: 2-4 years
Qualification: Bachelor's degree in Computer Science, B.Tech in IT or CSE, MCA, MSc IT, or any related field
Work Mode: Onsite - Mohali, PB
Shift Timings: 12 PM to 10 PM (afternoon shift)

Job Role and Responsibilities
Design and implement complex algorithms for critical functionalities
Take up system analysis, design, and documentation responsibilities
Obtain performance metrics of applications and optimize applications
Plan and manage project milestones and deadlines
Design database architecture and write MySQL queries
Design and implement highly scalable multi-threaded applications

Technical Background
Strong knowledge of Java, web services, and design patterns
Good logical, problem-solving, and troubleshooting ability to work on large-scale products
Expertise in code optimization and performance improvement; working knowledge of Java/MySQL profilers
Strong ability to debug, understand the problem, find the root cause, and apply the best possible solution
Knowledge of regular expressions, Solr, Elasticsearch, NLP, text processing, or any ML libraries
Fast learner with problem-solving and troubleshooting skills

Minimum Skills We Look For
Strong programming skills in Core Java, J2EE, and Java web services (REST/SOAP)
Good understanding of Object-Oriented Design (OOD) and design patterns
Experience in performance tuning, code optimization, and use of Java/MySQL profilers
Proven ability to debug, identify root causes, and implement effective solutions
Solid experience with MySQL and relational database design
Working knowledge of multi-threaded application development
Familiarity with search technologies like Solr, Elasticsearch, or NLP/text-processing tools
Understanding of regular expressions and data parsing
Exposure to Spring Framework, Hibernate, or microservices architecture is a plus
Experience with tools like Git, Maven, JIRA, and CI/CD pipelines is advantageous
Posted 1 week ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
Responsibilities
Build, train, and deploy ML models using Python on Azure/AWS
1+ years of experience building machine learning and deep learning models in Python
Experience working with Azure ML/AWS SageMaker
Ability to deploy ML models behind REST-based APIs
Proficient in distributed computing environments / big data platforms (Hadoop, Elasticsearch, etc.) as well as common database systems and value stores (SQL, Hive, HBase, etc.)
Ability to work directly with customers, with good communication skills
Ability to analyze datasets using SQL and Pandas
Experience with Azure Data Factory and Power BI
Experience with PySpark, Airflow, etc.
Experience with Docker/Kubernetes

Mandatory Skill Sets: Data Science, Machine Learning
Preferred Skill Sets: Data Science, Machine Learning
Years of Experience Required: 4 - 8
Education Qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
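The "deploy ML models with REST-based APIs" requirement above can be illustrated with a minimal, framework-free sketch using Python's standard library. The hard-coded linear model, the `/predict`-style handler, and the port are hypothetical placeholders; a real deployment would load a trained artifact and typically sit behind Azure ML or SageMaker managed endpoints rather than a hand-rolled server.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical linear model; a real service would load a trained artifact
# (e.g. a pickled sklearn model or an ONNX file) instead of fixed weights.
WEIGHTS = [0.4, 0.6]
BIAS = 0.1


def predict(features):
    """Score one example: dot(WEIGHTS, features) + BIAS."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS


class PredictHandler(BaseHTTPRequestHandler):
    """POST a JSON body like {"features": [1.0, 2.0]}; get a prediction back."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        payload = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)


def serve(port=8000):
    """Blocking entry point; call this to expose the model over HTTP."""
    HTTPServer(("", port), PredictHandler).serve_forever()
```

The scoring function is kept separate from the HTTP layer so it can be unit-tested and swapped for a managed endpoint without touching the transport code.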
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
COMPANY PROFILE
Bain & Company is the management consulting firm that the world's business leaders come to when they want results. Bain advises clients on strategy, operations, information technology, organization, private equity, digital transformation and strategy, and mergers and acquisitions, developing practical insights that clients act on and transferring skills that make change stick. The firm aligns its incentives with clients by linking its fees to their results. Bain clients have outperformed the stock market 4 to 1. Founded in 1973, Bain has offices in various countries, and its deep expertise and client roster cross every industry and economic sector.

POSITION SUMMARY
A Software Engineer is someone with a proven track record of delivering application modules with minimal to no supervision from senior engineering team members. The individual is also responsible for providing guidance to associate/entry-level engineers in the team. This position works as a member of an Agile/Scrum software development team focused exclusively on building and supporting Bain's most strategic internal software systems. Team members work collaboratively to design, build, and implement new features and functionality in their systems, aimed at delivering the most value to Bain's global users and supporting key business initiatives. Systems developed are primarily enterprise-scale browser-based or mobile applications developed with current Microsoft development languages and technologies, with a global user base and integration points with one or more other internal Bain systems.

RESPONSIBILITIES AND DUTIES
Technical Delivery (80%)
• Work with teams developing and updating enterprise applications. Work as a member of an agile software development team with full participation in all Agile team events and activities.
• Demonstrate the ability to identify all technical steps required to complete a story.
• Work with senior team members to evaluate product backlog items and functional specifications and determine the appropriate approach to developing the required functionality in the software application.
• Demonstrate high-level business and domain knowledge and the ability to understand and achieve business outcomes.
• Work in collaboration with other members of an agile team to analyze user stories, perform task breakdown, and complete committed tasks per the sprint schedule.
• Demonstrate a good understanding of using the underlying infrastructure to develop features.
• Follow the standard application design and architecture to develop features, and work with senior team members to ensure non-functional metrics (e.g., scalability, performance) are met.
• Prepare work estimates for committed tasks and components with support from senior team members.
• Write unit test plans for committed components. Execute and confirm successful completion of unit tests as part of the criteria for completion.
• Participate in the testing and implementation of application releases.
• Provide ongoing support for applications already in use, including problem diagnosis and resolution, ad hoc reporting support, and database administration.
• Demonstrate the ability to acquire new skills (e.g., creating automation tests using Selenium, creating UX designs, DevOps, performing functional/load testing) through internal/external training to be a T-shaped team member helping the team achieve sprint goals.
• Provide input during sprint retrospectives to improve the team experience.
• Follow Bain development project processes and standards in completing committed tasks and modules, and contribute to the continual evolution of processes and standards.
• With guidance, write technical documentation as required.

Research (10%)
• Contribute to evaluating and employing new and/or supplemental technologies necessary to deliver functionality for a given software application.
• Contribute to the research and evaluation of new tools and technologies beyond current product requirements that are likely to be used in future initiatives. Help in the presentation of findings and recommendations to the full Software Development team.
• Participate in internal skill development by sharing concepts and technologies with the full Software Development team.

Communication (10%)
• Present technical findings and recommendations to the Software Development team.
• Clearly communicate impediments to completing a story and ensure a clear understanding of the definition of 'done'.
• Provide input during sprint retrospectives to improve the team experience.

KNOWLEDGE, SKILLS & ABILITIES
Frameworks: .NET & .NET Core
Languages: C#, T-SQL
Web frameworks/libraries: Angular/React, JavaScript, HTML, CSS, Bootstrap, etc.
RDBMS: Microsoft SQL Server
Cloud: Microsoft Azure Services
Unit testing: xUnit, Jasmine, etc.
DevOps: GitHub Actions
Search engine: Elasticsearch, Coveo, etc.
NoSQL databases: MongoDB, Cosmos, etc.
Caching: Redis, Memcached
Preferred skills: Python & GenAI
Demonstrated knowledge of agile software development methodologies and processes
Demonstrated record of strong performance in prior software development positions
Strong communication and customer service skills
Strong analytic and problem-solving skills

QUALIFICATIONS
Bachelor's or equivalent degree
3-5 years of experience
Experience developing enterprise-scale applications
Demonstrated knowledge of agile software development methodologies and processes
Demonstrated record of strong performance in prior software development positions
Strong communication and customer service skills
Strong analytic and problem-solving skills
Demonstrated record of T-shaped behavior to expedite delivery by managing conflicts/contingencies
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Position Overview
Autodesk is looking for a Full-stack Software Engineer to join our Fusion Operations team. We are looking for a person who is enthusiastic about delivering innovative solutions aimed at providing a device-independent experience. The ideal candidate will have experience in all aspects of software development for desktop and web applications.

Fusion Operations is a manufacturing execution system (MES) providing real-time data for production management. It enables users to monitor, track, report, and even control the various processes and systems used to manufacture goods, from raw material to shipping:
Plan production schedules to enhance the flexibility of job scheduling
Track inventory, monitor workers, and oversee machines to help optimize overall production efficiency
Manage product quality through production traceability to align with industry standards and regulations

Responsibilities
Lead the design, implementation, testing, and maintenance of the application
Produce clean, effective, secure, maintainable, and well-documented code
Collaborate closely with cross-functional teams to align on project goals and timelines
Utilize debugging techniques to troubleshoot and resolve issues efficiently
Develop and maintain automated tests and increase overall code coverage
Leverage cloud technologies, including AWS services such as S3, SQS, and RDS, for scalable and reliable solutions
Participate in on-call rotation to support production systems

Minimum Qualifications
Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
3+ years of industry experience building and delivering robust, performant, and maintainable commercial applications
Strong understanding of object-oriented programming principles
Proficiency in Java, with a good understanding of its ecosystems
Proficiency in frontend development such as JavaScript/HTML/CSS
Familiarity with the concepts of the MVC (Model-View-Controller) pattern, JDBC (Java Database Connectivity), and RESTful web services
Knowledge of the JVM (Java Virtual Machine), its drawbacks, weaknesses, and workarounds
Experience with MySQL databases
Familiarity with Agile methodologies and working in a Scrum framework
Excellent problem-solving skills and ability to adapt to changing priorities
Strong verbal and written communication skills in English

Preferred Qualifications
Experience working with Java frameworks such as Play or Spring
Experience with frontend frameworks such as Vue.js, React, or similar
Experience with Elasticsearch or similar
Experience with test automation tools (JUnit, Selenium, etc.)
Experience with build and CI/CD tools such as Jenkins
Basic understanding of event-driven architecture principles
Familiarity with CAD concepts related to Inventor, AutoCAD, Factory Design Utilities
#LS-K2
Posted 1 week ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Bangkok, Thailand

About Agoda
Agoda is an online travel booking platform for accommodations, flights, and more. We build and deploy cutting-edge technology that connects travelers with a global network of 4.7M hotels and holiday properties worldwide, plus flights, activities, and more. Based in Asia and part of Booking Holdings, our 7,100+ employees representing 95+ nationalities in 27 markets foster a work environment rich in diversity, creativity, and collaboration. We innovate through a culture of experimentation and ownership, enhancing the ability for our customers to experience the world.

Our Purpose – Bridging the World Through Travel
We believe travel allows people to enjoy, learn and experience more of the amazing world we live in. It brings individuals and cultures closer together, fostering empathy, understanding and happiness. We are a skillful, driven and diverse team from across the globe, united by a passion to make an impact. Harnessing our innovative technologies and strong partnerships, we aim to make travel easy and rewarding for everyone.
Get to Know our Team
Join Agoda's Wallet team to help build and scale innovative financial products for travelers. This is a greenfield project using the latest technology stack, offering the chance to work on high-impact solutions in a collaborative, fast-paced environment.

The Opportunity
We are looking for Software Engineers to join our Payment team. You'll play a key role in designing, developing, and scaling new payment features, supporting multi-currency conversion and card issuance for global travelers. You'll be using the most current technologies and best practices to accomplish our goals. Our typical day involves the creation of new end-to-end systems, building advanced architecture, developing new features, and working in a culture that is always looking to improve our quality, tools, and efficiency.

In this Role, you'll get to:
Design, develop, and maintain robust, scalable, and secure features for the payment system
Collaborate with cross-functional teams to deliver high-quality solutions aligned with business goals
Participate in code reviews and technical discussions, and contribute to best practices
Troubleshoot, debug, and optimize systems for performance and reliability
Stay up to date with industry trends and emerging technologies in payments and digital wallets
Contribute to a greenfield project with ambitious long-term plans, tackling real-time foreign exchange and hedging challenges

What you'll Need to Succeed:
5+ years of overall experience developing web applications in client-side frameworks, especially React.js
3+ years of experience in fintech or traditional finance with FX (foreign exchange) experience
B.S. in Computer Science or a quantitative field; M.S.
preferred
Working experience with agile, analytics, A/B testing and/or feature flags, Continuous Delivery, and Trunk-based Development
Excellent HTML/CSS skills – you understand not only how to build the data, but how to make it look great too
Excellent understanding of object-oriented TypeScript
You love new technologies and approaches and want to use the best tools available. We want people who can help us continually evolve our stack
Great communication and coordination skills
Excellent analytical thinking and problem-solving skills
You have a good command of the English language

It's Great if you have:
Familiarity with card issuing, FX, FX quotes/rates, or VDC (Virtual Debit Card)
Experience working in agile, cross-functional teams
Knowledge of physical architecture at scale, building resilient, highly available solutions with no single points of failure
Knowledge of one or more of the following: NoSQL technologies (Cassandra, ScyllaDB, Elasticsearch, Redis, DynamoDB, etc.), queueing systems (Kafka, RabbitMQ, SQS, Azure Service Bus, etc.)
Working experience with containers and Dockerization; K8s is a plus
Knowledge of and hands-on experience with CI/CD solutions
Strong experience in all aspects of client-side performance optimization
Extremely proficient in modern coding and design practices.
For example, Clean Code, SOLID principles, and TDD
Experience with multiple front-end platforms, including iOS, Android, Web, and API services
Have worked at an app or internet company operating at scale, with large numbers of users and transactions per second
Have experience at a data-driven company, analyzing and working with Big Data
Have led teams and greenfield projects solving large system problems
Have worked on global projects serving world markets, with distributed data centers and localization of the front end and data

This position is based in Bangkok, Thailand (Relocation Provided)

Equal Opportunity Employer
At Agoda, we pride ourselves on being a company represented by people of all different backgrounds and orientations. We prioritize attracting diverse talent and cultivating an inclusive environment that encourages collaboration and innovation. Employment at Agoda is based solely on a person's merit and qualifications. We are committed to providing equal employment opportunity regardless of sex, age, race, color, national origin, religion, marital status, pregnancy, sexual orientation, gender identity, disability, citizenship, veteran or military status, and other legally protected characteristics. We will keep your application on file so that we can consider you for future vacancies, and you can always ask to have your details removed from the file. For more details please read our privacy policy.

Disclaimer
We do not accept any terms or conditions, nor do we recognize any agency's representation of a candidate, from unsolicited third-party or agency submissions. If we receive unsolicited or speculative CVs, we reserve the right to contact and hire the candidate directly without any obligation to pay a recruitment fee.
Posted 1 week ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Senior Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. 
Responsibilities:
· Build, train, and deploy ML models using Python on Azure/AWS
· 1+ years of experience building Machine Learning and Deep Learning models in Python
· Experience working on Azure ML / AWS SageMaker
· Ability to deploy ML models with REST-based APIs
· Proficient in distributed computing environments / big data platforms (Hadoop, Elasticsearch, etc.) as well as common database systems and value stores (SQL, Hive, HBase, etc.)
· Ability to work directly with customers, with good communication skills
· Ability to analyze datasets using SQL and Pandas
· Experience working on Azure Data Factory and Power BI
· Experience with PySpark, Airflow, etc.
· Experience working with Docker/Kubernetes

Mandatory skill sets: Data Science, Machine Learning
Preferred skill sets: Data Science, Machine Learning
Years of experience required: 4 - 8
Education qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Data Science Optional Skills Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
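The "deploy ML models with REST-based APIs" requirement usually boils down to wrapping a trained model behind a handler that accepts a JSON request and returns a JSON prediction. A minimal stdlib sketch, where the model itself is a hypothetical stand-in (a real deployment would load artifacts from Azure ML or SageMaker):

```python
import json

def dummy_model(features):
    """Hypothetical stand-in for a trained classifier.

    Toy rule: positive class if the feature sum is non-negative."""
    return "positive" if sum(features) >= 0 else "negative"

def predict_handler(request_body: str) -> str:
    """Parse a JSON request payload, score it, and return a JSON response."""
    payload = json.loads(request_body)
    label = dummy_model(payload["features"])
    return json.dumps({"prediction": label})

if __name__ == "__main__":
    print(predict_handler('{"features": [0.5, -0.2, 1.1]}'))
```

In practice this handler would sit behind a web framework or a managed endpoint; the request/response shape (`features`, `prediction`) is an assumption for illustration.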
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We are looking for a passionate and experienced Backend Developer with solid experience in PHP and Node.js and a strong understanding of modern backend architectures and containerized deployments. The ideal candidate will have hands-on experience working with MySQL, MongoDB, Redis, and Elasticsearch, and be familiar with Docker-based workflows. Experience with AWS and Kubernetes (EKS) is a big plus.

Key Responsibilities:
● Design, build, and maintain robust backend services and APIs using PHP (Laravel/Symfony) and Node.js.
● Build and optimize high-performance, scalable microservices and RESTful APIs.
● Work with relational and NoSQL databases: MySQL, MongoDB, Redis, Elasticsearch.
● Implement and manage containerized applications using Docker.
● Collaborate with DevOps to deploy services using AWS ECS/EKS and manage infrastructure via AWS.
● Optimize application performance, security, and scalability.
● Participate in code reviews, technical discussions, and architectural decisions.
● Maintain documentation for backend systems and APIs.

Requirements:
✅ Must-Have Skills
● 4–8 years of backend development experience.
● Strong problem-solving skills.
● Proficiency in PHP (Laravel/Symfony) and Node.js.
● Strong experience with MySQL, MongoDB, Redis, and Elasticsearch.
● Hands-on experience with Docker for local development and production builds.
● Solid understanding of API design, versioning, and integration.
● Experience with Git and CI/CD workflows.

🌟 Good to Have:
● Experience with AWS services: EC2, S3, RDS, Lambda, CloudWatch, etc.
● Familiarity with EKS (Elastic Kubernetes Service) or similar orchestration tools.
● Understanding of messaging systems (Kafka, SQS, RabbitMQ).
● Exposure to monitoring/logging tools like ELK, Prometheus, or Grafana.
Posted 1 week ago
2.0 - 3.0 years
0 Lacs
Hyderābād
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world. The opportunity We are the only professional services organization who has a separate business dedicated exclusively to the financial services marketplace. Join Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. We’re looking for an AWS Cloud Native – Full Stack Engineer at EY GDS. You will work on designing and implementing cloud-native applications and services using AWS technologies. You will collaborate with development teams to build, deploy, and manage applications that meet business needs and leverage AWS best practices. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of a new service offering. We are the only professional services organization who has a separate business dedicated exclusively to the financial and non-financial services marketplace. 
Join the Digital Engineering team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design, and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations. Your key responsibilities Application Development: Design and develop cloud-native applications and services using Angular/React/TypeScript, Java Spring Boot/Node, and AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR. Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates. Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS. Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency. Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations. Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting. Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow. Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability. Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration. Skills and attributes for success Required Skills: AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, and RDS, Glue, Redshift, EMR. Programming: Strong programming skills in languages such as Python, Java, or Node.js, Angular/React/Typescript. CI/CD: Experience with CI/CD tools and practices, including AWS CodePipeline, CodeBuild, and CodeDeploy. Infrastructure as Code: Familiarity with IaC tools like AWS CloudFormation or Terraform for automating application infrastructure. Security: Understanding of AWS security best practices, including IAM, KMS, and encryption. Observability Tools: Proficiency in using observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack. 
Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm. Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or ELK Stack. Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams. Preferred Qualifications: Certifications : AWS Certified Solutions Architect – Associate or Professional, AWS Certified Developer – Associate, or similar certifications. Experience: 2-3 Years previous experience in an application engineering role with a focus on AWS technologies. Agile Methodologies: Familiarity with Agile development practices and methodologies. Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues. Education: Degree: Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience What we offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. 
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
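The performance-tuning and observability duties above often reduce to computing latency percentiles over a metrics window and alerting on a threshold, as dashboards in CloudWatch or Grafana do. A minimal stdlib sketch (threshold and window values are illustrative):

```python
import statistics

def p95(latencies_ms):
    """95th percentile of a window of latency samples (inclusive method)."""
    return statistics.quantiles(latencies_ms, n=100, method="inclusive")[94]

def should_alert(latencies_ms, threshold_ms):
    """True when the windowed p95 latency breaches the alert threshold."""
    return p95(latencies_ms) > threshold_ms

if __name__ == "__main__":
    # A window where 10% of requests are slow: p95 lands on the slow tail.
    window = [12.0, 15.0, 14.0, 18.0, 250.0, 16.0, 13.0, 17.0, 15.0, 14.0] * 10
    print(p95(window), should_alert(window, 100.0))
```

Real systems would pull these samples from CloudWatch, Prometheus, or X-Ray traces rather than an in-memory list; the aggregation logic is the same.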
Posted 1 week ago
5.0 years
5 - 10 Lacs
Hyderābād
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operations processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for Last Mile Channel Allocation
- Using LLMs to automate analytical processes and insight generation
- Ops research to optimize middle-mile truck routes
- Working with global partner science teams on Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
- Deep Learning models to synthesize attributes of addresses
- Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
2. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab)
- Experience with statistical models, e.g. multinomial logistic regression
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in Reinforcement Learning and Gen AI

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
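The multinomial logistic regression called out in the qualifications scores each class as a linear function of the features and normalizes the scores with a softmax. A minimal stdlib sketch (weights are illustrative; bias terms are omitted for brevity):

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_class(weights, features):
    """Multinomial logistic regression inference: score each class as w . x,
    softmax the scores, and return (argmax class index, probabilities)."""
    scores = [sum(w * x for w, x in zip(wv, features)) for wv in weights]
    probs = softmax(scores)
    return max(range(len(probs)), key=probs.__getitem__), probs
```

In production this would come from a fitted library model (e.g. scikit-learn or Spark MLlib); the sketch only shows the inference math.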
Posted 1 week ago
5.0 years
5 - 10 Lacs
Gurgaon
On-site
DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis:
- Using live package and truck signals to adjust truck capacities in real time
- HOTW models for Last Mile Channel Allocation
- Using LLMs to automate analytical processes and insight generation
- Ops research to optimize middle-mile truck routes
- Working with global partner science teams on Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings
- Deep Learning models to synthesize attributes of addresses
- Abuse detection models to reduce network losses

Key job responsibilities
1. Use machine learning and analytical techniques to create scalable solutions for business problems; analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes
2. Design, develop, evaluate and deploy innovative and highly scalable ML/OR models
3. Work closely with other science and engineering teams to drive real-time model implementations
4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions
5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance
6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production
7. Lead projects and mentor other scientists and engineers in the use of ML techniques

BASIC QUALIFICATIONS
- 5+ years of data scientist experience
- Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab)
- Experience with statistical models, e.g. multinomial logistic regression
- Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive)
- Experience working collaboratively with data engineers and business intelligence engineers
- Demonstrated expertise in a wide range of ML techniques

PREFERRED QUALIFICATIONS
- Experience as a leader and mentor on a data science team
- Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science
- Expertise in Reinforcement Learning and Gen AI

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 1 week ago
2.0 - 4.0 years
3 - 6 Lacs
Mohali
On-site
Job Description
Experience: 2-4 years
Qualification: Bachelor’s degree in Computer Science, B.Tech in IT or CSE, MCA, MSc IT, or any related field.
Work Mode: Onsite - Mohali, PB
Shift Timings: 12 PM to 10 PM (Afternoon Shift)

Job Role and Responsibilities:
- Design and implement complex algorithms for critical functionalities
- Take up system analysis, design, and documentation responsibilities
- Obtain performance metrics of applications and optimize applications
- Plan and handle project milestones and deadlines
- Design database architecture and write MySQL queries
- Design and implement highly scalable multi-threaded applications

Technical background:
- Strong knowledge of Java, web services, and design patterns
- Good logical, problem-solving, and troubleshooting ability for work on large-scale products
- Expertise in code optimization and performance improvement; working knowledge of Java/MySQL profilers
- Strong ability to debug, understand a problem, find the root cause, and apply the best possible solution
- Knowledge of regular expressions, Solr, Elasticsearch, NLP, text processing, or any ML libraries
- Fast learner with strong problem-solving and troubleshooting skills

Minimum skills we look for:
- Strong programming skills in Core Java, J2EE, and Java web services (REST/SOAP)
- Good understanding of Object-Oriented Design (OOD) and design patterns
- Experience in performance tuning, code optimization, and use of Java/MySQL profilers
- Proven ability to debug, identify root causes, and implement effective solutions
- Solid experience with MySQL and relational database design
- Working knowledge of multi-threaded application development
- Familiarity with search technologies like Solr, Elasticsearch, or NLP/text-processing tools
- Understanding of regular expressions and data parsing
- Exposure to Spring Framework, Hibernate, or microservices architecture is a plus
- Experience with tools like Git, Maven, JIRA, and CI/CD pipelines is advantageous
Posted 1 week ago
15.0 years
0 Lacs
Noida
On-site
Join our Team About this opportunity: The Head of Automation owns and leads Automation strategy and execution providing leadership and vision to the organization. Collaborating closely with the other Heads of Department and Individual Contributors to ensure E2E management and success of delivery. What you will do: Drive operational efficiency and productivity through quality automation models, aligning with SDE targets and boosting automation saturation in MS Networks. Focus on stable automation performance with reduced outages and stronger operational outcomes. Collaborate with SDAP for streamlined automation monitoring, issue tracking, and reporting. Align automation initiatives with BCSS MS Networks’ AI/ML strategy. Enhance communication on automation benefits and their business impact. Manage O&M, lifecycle, and performance of SL Operate tools, ensuring clear automation SLAs and effective tracking. Contribute to service architecture strategies (EOE, AAA) to maximize automation value and roadmap alignment. Institutionalize best practices and automate internal team processes to reduce manual efforts. The skills you bring: 15+ years of experience in managed services environment, with minimum 8+ years of experience in Managed Services operations University degree in Engineering, Mathematics or Business Administration, MBA is a plus. Strong grasp of managed services delivery and Ericsson SD processes. Deep understanding of operator business needs and service delivery models. Skilled in Ericsson process measurement tools and SL Operate/SDAP MSDP environments (eTiger, ACE, etc.). Technically proficient in OOAD, design patterns, and development on Unix/Linux/Windows with Java, JS, DB, Shell scripts, and monitoring tools like Nagios. Familiar with software component interactions and DevOps practices. Hands-on with automation tools (Enable, Blue Prism, MATE) and monitoring platforms (Grafana, Zabbix, Elasticsearch, Graylog). 
Strong experience with web/proxy/app servers (Tomcat, Nginx, Apache). Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Noida Req ID: 770844
Posted 1 week ago
6.0 years
4 - 8 Lacs
Noida
On-site
Join our Team Job Summary: We are seeking a highly skilled and certified professional with deep expertise in Zabbix, ELK stack, modern monitoring & observability tools, and DevOps practices. The ideal candidate will have strong experience in infrastructure automation, 3rd-party integrations, multi-database environments, and cloud-native deployments. This role demands a proactive problem-solver with a passion for performance, reliability, and scalable solutions in hybrid and cloud environments. Key Responsibilities: Design, implement, and maintain robust monitoring solutions using Zabbix, Grafana, and the ELK stack (Elasticsearch, Logstash, Kibana). Develop automation scripts and CI/CD pipelines using Python, Golang, Jenkins, and GitOps practices. Collaborate with development and operations teams to implement observability standards across environments (on-prem & cloud). Maintain, optimize, and troubleshoot integrations with BMC Remedy, ServiceNow, or other ticketing systems. Manage and support diverse database environments: MariaDB, MongoDB, MySQL, PostgreSQL, Oracle. Provide support for both Linux and Windows based systems in production. Design and support microservice deployment workflows using GCP, GKE, and container orchestration tools. Work closely with platform and application teams to ensure end-to-end visibility and service reliability. Drive best practices for infrastructure-as-code, environment provisioning, and configuration management. Lead efforts on third-party integrations to enhance platform functionality and automation. Required Skills & Experience: Proven experience with Zabbix, ELK, and Grafana for real-time monitoring and logging. Hands-on experience with DevOps tools and CI/CD pipelines (Jenkins, Git, etc.). Strong scripting and automation experience in Python, Shell, Golang, or Java Spring Boot. Solid understanding of GCP services and GKE orchestration; GCP certification is highly preferred. 
In-depth experience working with databases like MongoDB, MySQL, MariaDB, PostgreSQL, Oracle. Working knowledge of BMC ITSM, Remedy, or similar ITSM/ticketing platforms. Proficiency in administering Linux and Windows environments. Experience with infrastructure monitoring, alerting, log management, and performance tuning. Familiarity with third-party integrations (REST APIs, Webhooks, etc.) and system interoperability. Strong understanding of cloud-native architectures, service reliability, and infrastructure scaling. Preferred Qualifications: GCP Professional or GKE certification. Experience with Terraform, Ansible, or similar IaC tools. Familiarity with container security and compliance frameworks. Exposure to enterprise-grade SRE/Observability practices. GCP Professional Cloud Architect / GKE Certified. Experience Level: 6+ years. Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Learn more. Primary country and city: India (IN) || Bangalore Req ID: 770694
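The ticketing-system integrations this role mentions typically map a monitoring alert onto a ticket-creation payload sent over a REST API or webhook. A minimal sketch, where the field names are illustrative rather than an actual ServiceNow or Remedy schema:

```python
import json

def build_incident_payload(alert_name, severity, host, details):
    """Map an internal monitoring alert onto a generic ticketing payload.

    Field names here are hypothetical, not a real ServiceNow/Remedy schema.
    """
    # Map symbolic severities to a numeric urgency; unknown values fall
    # back to the lowest urgency rather than failing.
    severity_map = {"critical": 1, "major": 2, "minor": 3, "warning": 4}
    return {
        "short_description": f"[{severity.upper()}] {alert_name} on {host}",
        "urgency": severity_map.get(severity, 4),
        "description": details,
        "source": "zabbix",
    }

if __name__ == "__main__":
    payload = build_incident_payload(
        "disk_space_low", "major", "db01", "Free space below 10%"
    )
    print(json.dumps(payload, indent=2))
```

An actual integration would POST this payload to the ticketing platform's incident endpoint with authentication; only the alert-to-ticket mapping is sketched here.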
Posted 1 week ago
12.0 years
0 Lacs
Andhra Pradesh, India
On-site
Job Description
- Expert knowledge of the Elasticsearch platform; must have built an enterprise-level ES solution before, preferably more than once
- Design and implement highly scalable ELK (Elasticsearch, Logstash and Kibana) stack solutions
- Hands-on experience in semantic search, vector search, and natural language processing (NLP)
- Mandatory: advanced-level Elasticsearch experience for data processing, storage and visualization in a large-scale enterprise environment
- Fine-tune the bulk load process
- Good experience in query languages and writing complex queries with joins over large amounts of data
- Ability to independently build clusters, index templates and pipelines
- Knowledge of using Stack Monitoring and APIs to monitor node health
- Experience with building multiple clusters, shards, indices, alias indices, etc.

Requirements
- 12+ years of overall experience with 8+ years of relevant experience
- Bachelor's degree or equivalent or higher education
- Ability to multitask and prioritize in a fast-paced, team-oriented environment
- Strong analytical and communication skills
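Two of the artifacts this role centers on are index templates and bulk-load payloads. A minimal sketch that only builds the request bodies (the index names, mappings, and shard counts are illustrative, and no cluster connection is made):

```python
import json

def index_template(name, pattern, shards=3, replicas=1):
    """Build an index-template body in the shape Elasticsearch's
    _index_template API accepts; the mapping fields are illustrative."""
    return {
        "index_patterns": [pattern],
        "template": {
            "settings": {
                "number_of_shards": shards,
                "number_of_replicas": replicas,
            },
            "mappings": {
                "properties": {
                    "@timestamp": {"type": "date"},
                    "message": {"type": "text"},
                }
            },
        },
    }

def bulk_body(index, docs):
    """Serialize docs into the NDJSON format the _bulk API expects:
    an action line followed by a source line, per document."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(json.dumps(index_template("logs-template", "logs-*"), indent=2))
    print(bulk_body("logs-2024", [{"message": "boot ok"}]))
```

Tuning the bulk load then becomes a matter of batching `bulk_body` calls to a size the cluster ingests efficiently, which is the "fine-tune the bulk load process" duty above.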
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Full Stack Developer Industry Type: Product/ Legal Tech Department: Engineering Employment Type: Full Time/Permanent Role Category: Software Development Location: On-Site (Pune) Timings: 9-6 PM (Mon-Fri) About the Company NSM (Product of Taqafa Labs Technologies Private Limited, Pune, India) is a subsidiary MNC company of NewMind Bilgi Yonetim Sistemleri A.S, a proprietary software product development company in the legal tech domain with its headquarters based in Istanbul, Turkey. Established more than 10 years ago, NewMind is a leader in legal tech in Turkey with over 100+ Engineers and two offices in Istanbul, Turkey. NSM office in Pune has over 17+ engineers. Some of the cutting-edge technologies used by NewMind to automate legal workflows are Artificial Intelligence, Turkish NLP, Machine Learning, Speech-to-Text, Generative AI, Agentic AI, LLMs, RAG, Orchestration of ML expert agents, Metaverse, Web3, AR/VR, Data Visualization and much more! NSM team is growing, and we are looking for individuals who love challenges and find happiness in solving problems and collaborating with industry’s best and brightest and thrive to stay up to the latest trends in technologies. We are eagerly searching for talented and dedicated engineers to join our growing team in NSM India! Location : Pune, India Timing : (5 days working Mon-Fri) Experience : 4+ yrs Job description We are seeking a senior Node.js Full Stack Developer to work with our subsidiary in India. You will be part of our software development team in Pune,India and help us build NewMind’s innovative software platform. You will use your skills in Javascript, Node.js and React.js and their frameworks to create powerful web applications. You will collaborate with cross-functional teams such as UX/UI designers, scrum masters, product managers, and other developers in Turkey or abroad. 
You will be involved in code review, testing, following best coding practices, and deploying web applications to support our digital products. You will be responsible for the full product lifecycle and will develop reliable, flexible, and scalable apps using Node.js and React.js. Ability to work independently and in a team-oriented environment.
Qualifications:
· A bachelor's degree or equivalent (BE/B.Tech/BCA or any Computer/IT degree)
· Strong English communication and writing skills
· 4-5 years of experience working in the IT industry
· Web development experience using Node.js and React.js
· Ability to manage simultaneous projects and prioritize company and team needs
· Preferably has previously worked in a product company or startup
· Preferably has worked in multiple roles and teams simultaneously
Requirements
· At least four years' experience as a Node.js developer and with its frameworks (Express.js, Meteor.js, Next.js)
· Experience with front-end technologies such as React.js, Angular.js, and Vue.js
· Extensive knowledge of JavaScript, web stacks, libraries, and frameworks
· Knowledge of front-end technologies such as HTML5 and CSS3
· Ability to work as a team player and collaborate with cross-functional teams
· Experience with Docker and Kubernetes
· Experience with REST APIs, GraphQL APIs, JSON, XML, and asynchronous API integration
· Hands-on knowledge of relational and non-relational databases such as PostgreSQL, MySQL, MongoDB, Elasticsearch, and Redis
· Working knowledge and understanding of SSO implementation, user authentication, session management, and authorization between multiple systems, servers, and environments
· Knowledge of Agile/Scrum methodologies and tools such as Jira and Azure DevOps
· Proficient knowledge of code versioning tools such as Git
· Good understanding and implementation of unit testing
· Understanding of CI/CD
· Knowledge of microservices architecture
Responsibilities:
· Close coordination with the Turkish team's product owner, business analysts, developers, and QA team
· Work as part of a team developing applications and services using Agile development methods
· Develop the back-end software and customer-facing UI; maintain and update the existing code
· Develop and maintain all server-side network components
· Ensure optimal performance of the central database and responsiveness to front-end requests
· Assist with the creation and development of feature requirements
· Contribute to team and organizational improvements in process and infrastructure
· Effectively use tools and ingenuity to identify and fix defects before they become a problem
· Develop high-performance applications by writing testable, reusable, efficient, clean, well-documented, and commented code
Benefits
· A market-competitive salary that is applied through a consistent process, equitable for all our employees, and regularly reviewed based on industry data
· An inclusive environment where you can help shape the culture, not just by fitting in but by adding to it
· An annual performance-based bonus
· Diwali bonus
· Comprehensive health/accident/life insurance
· 15 paid leaves, 9 casual and sick leaves, multiple all-company wellness days, close to 10-12 Indian holidays, and leave for other life events
Key Skills: Node.js, React.js, JavaScript, TypeScript, Software Development Life Cycle, application development, Git, MySQL, PostgreSQL, MongoDB, Elasticsearch, DOM, Backend, Frontend
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
At Attri, we are seeking a talented Fullstack engineer to join our dynamic team. We are a cutting-edge company, and we're looking for an individual who is passionate, inquisitive, and a self-learner to contribute to the success of our projects.
RESPONSIBILITIES
Modern Web Development: Proficiency in HTML5, CSS3, ES6+, TypeScript, and Node.js, with a strong emphasis on staying up to date with the latest technologies.
TypeScript: Hands-on with Generics, Template Literals, Mapped Types, and Conditional Types.
Flexible Approach: Based on the problem at hand, apply the appropriate solution while considering all the risks.
Frontend
React.js and Flux Architecture: Extensive experience in React.js and Flux architecture, along with external state management, to build robust and performant web applications.
JS Event Loop: Understanding of the event loop, the criticality of not blocking the main thread, and cooperative scheduling in React.
State Management: Hands-on with more than one state management library.
Ecosystem: Ability to leverage the vast JS ecosystem and hands-on experience with non-typical libraries.
Backend
SQL: Extensive hands-on experience with Postgres; comfortable with json_agg, json_build_object, WITH clauses, CTEs, views/materialized views, and transactions.
Redis: Hands-on with different data structures and their usage.
Architectural Patterns: Backend for Frontend, Background Workers, CQRS, Event Sourcing, Orchestration/Choreography, etc.
Transport Protocols: HTTP(S), SSE, and WS(S), to optimize data transfer and enhance application performance.
Serialization Protocols: JSON and at least one more protocol.
Authentication/Authorization: Comfortable with OAuth, JWT, and other mechanisms for different use cases.
Open Source: Comfortable reading the source of libraries in use and understanding their internals; able to fork a library to improve it, fix a bug, or redesign it.
Tooling: Knowledge of essential frontend tools like Prettier, ESLint, and Conventional Commits to maintain code quality and consistency; dependency management and versioning; familiarity with CI/CD.
Testing: Utilize Jest/Vitest and React Testing Library for comprehensive testing of your code, ensuring high code quality and reliability.
Collaboration: Collaborate closely with our design team to craft responsive and themable components for data-intensive applications, ensuring a seamless user experience.
Programming Paradigms: Solid grasp of both Object-Oriented Programming and Functional Programming concepts to create clean and maintainable code.
Design/Architectural Patterns: Identify the suitable design and architectural pattern to solve the problem at hand; comfortable with tailoring the pattern to fit the problem optimally.
Modular and Reusable Code: Write modular, reusable, and testable code that enhances codebase maintainability.
DSA: Basic understanding of DSA when required to optimize hot paths.
GOOD TO HAVE:
Python: Django Rest Framework, Celery, Pandas/NumPy, LangChain, Ollama
Storybook: Storybook to develop components in isolation, streamlining the UI design and development process.
Charting and Visualization: Experience with charting and visualization libraries, especially ECharts by Apache, to create compelling data representations.
Tailwind CSS: Understanding of Tailwind CSS for efficient and responsive UI development.
NoSQL Stores: Elasticsearch, Neo4j, Cassandra, Qdrant, etc.
Functional Reactive Programming
RabbitMQ/Kafka
GREAT TO HAVE:
Open Source Contribution: Experience in contributing to open-source projects (not limited to personal projects or forks) that showcases your commitment to the development community.
Renderless/Headless React Components: Developing renderless or headless React components to provide flexible and reusable UI solutions.
End-to-End Testing: Experience with Cypress or any other end-to-end (E2E) testing framework, ensuring the robustness and quality of the entire application.
Deployment: Being target-agnostic and understanding the nuances of the application in operation.
QUALIFICATIONS:
Bachelor's degree in Computer Science, Information Technology, or a related field.
5+ years of relevant experience in frontend web development, including proficiency in HTML5, CSS3, ES6+, TypeScript, React.js, and related technologies.
Solid understanding of Object-Oriented Programming, Functional Programming, SOLID principles, and Design Patterns.
Proven experience in developing modular, reusable, and testable code.
Prior work on data-intensive applications and collaboration with design teams to create responsive and themable components.
Experience with testing frameworks like Jest/Vitest and React Testing Library.
Benefits
Competitive Salary 💸
Support for continual learning (free books and online courses) 📚
Reimbursement for gym or physical activity of your choice 🏋🏽♀️
Leveling Up Opportunities 🌱
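The Postgres JSON-aggregation skills this posting names (json_agg, json_build_object, CTEs) typically combine in queries like the following sketch; the customers/orders schema and column names are hypothetical, introduced only for illustration:

```python
# Hypothetical Postgres query: one JSON document per customer, built
# server-side with a CTE plus json_agg / json_build_object.
# (Schema and column names are illustrative.)
QUERY = """
WITH recent_orders AS (          -- CTE: restrict rows before aggregating
    SELECT customer_id, id, total
    FROM orders
    WHERE created_at > now() - interval '30 days'
)
SELECT c.id,
       c.name,
       json_agg(                 -- collapse the joined rows into a JSON array
           json_build_object('order_id', o.id, 'total', o.total)
       ) AS recent_orders
FROM customers c
JOIN recent_orders o ON o.customer_id = c.id
GROUP BY c.id, c.name;
"""

# With a driver such as psycopg, each row's recent_orders column would
# arrive as a Python list of dicts, e.g. [{"order_id": 1, "total": 99.5}, ...]
print(QUERY)
```

The pattern pushes the row-to-document shaping into the database, so the API layer returns nested JSON without a second query per customer.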
Posted 1 week ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Join our Team
Job Summary:
We are seeking a highly skilled and certified professional with deep expertise in Zabbix, the ELK stack, modern monitoring & observability tools, and DevOps practices. The ideal candidate will have strong experience in infrastructure automation, third-party integrations, multi-database environments, and cloud-native deployments. This role demands a proactive problem-solver with a passion for performance, reliability, and scalable solutions in hybrid and cloud environments.
Key Responsibilities:
Design, implement, and maintain robust monitoring solutions using Zabbix, Grafana, and the ELK stack (Elasticsearch, Logstash, Kibana).
Develop automation scripts and CI/CD pipelines using Python, Golang, Jenkins, and GitOps practices.
Collaborate with development and operations teams to implement observability standards across environments (on-prem & cloud).
Maintain, optimize, and troubleshoot integrations with BMC Remedy, ServiceNow, or other ticketing systems.
Manage and support diverse database environments: MariaDB, MongoDB, MySQL, PostgreSQL, Oracle.
Provide support for both Linux- and Windows-based systems in production.
Design and support microservice deployment workflows using GCP, GKE, and container orchestration tools.
Work closely with platform and application teams to ensure end-to-end visibility and service reliability.
Drive best practices for infrastructure-as-code, environment provisioning, and configuration management.
Lead efforts on third-party integrations to enhance platform functionality and automation.
Required Skills & Experience:
Proven experience with Zabbix, ELK, and Grafana for real-time monitoring and logging.
Hands-on experience with DevOps tools and CI/CD pipelines (Jenkins, Git, etc.).
Strong scripting and automation experience in Python, Shell, Golang, or Java Spring Boot.
Solid understanding of GCP services and GKE orchestration; GCP certification is highly preferred.
In-depth experience working with databases like MongoDB, MySQL, MariaDB, PostgreSQL, and Oracle.
Working knowledge of BMC ITSM, Remedy, or similar ITSM/ticketing platforms.
Proficiency in administering Linux and Windows environments.
Experience with infrastructure monitoring, alerting, log management, and performance tuning.
Familiarity with third-party integrations (REST APIs, webhooks, etc.) and system interoperability.
Strong understanding of cloud-native architectures, service reliability, and infrastructure scaling.
Preferred Qualifications:
GCP Professional or GKE certification.
Experience with Terraform, Ansible, or similar IaC tools.
Familiarity with container security and compliance frameworks.
Exposure to enterprise-grade SRE/observability practices.
GCP Professional Cloud Architect / GKE certified.
Experience Level: 6+ years
Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.
Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.
Primary country and city: India (IN) || Bangalore
Req ID: 770694
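The monitoring-to-ticketing integration work described above usually boils down to mapping an alert into a webhook payload. A minimal sketch follows; the field names and severity mapping are hypothetical, not the real ServiceNow or Remedy schema:

```python
import json

# Hypothetical alert-to-ticket mapping for a REST/webhook integration.
# Field names, severities, and urgency codes are illustrative only.
def build_incident_payload(host: str, trigger: str, severity: str) -> str:
    """Map a monitoring alert to a ticketing-system webhook body."""
    severity_to_urgency = {"disaster": "1", "high": "2", "warning": "3"}
    payload = {
        "short_description": f"[{severity.upper()}] {trigger} on {host}",
        "urgency": severity_to_urgency.get(severity, "3"),  # default: low
        "category": "monitoring",
        "source": "zabbix",
    }
    return json.dumps(payload)

# In production this body would be POSTed to the ticketing endpoint,
# e.g. requests.post(WEBHOOK_URL, data=body, headers={...}).
print(build_incident_payload("db-01", "High CPU load", "high"))
```

Keeping the mapping in one small, testable function makes it easy to swap ticketing backends without touching the alerting side.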
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
JOB PURPOSE:
Reporting to the Director, DevSecOps & SRE, the DevSecOps Engineer will be responsible for:
Designing, implementing, and monitoring enterprise-grade secure, fault-tolerant infrastructure.
Defining and evolving Build & Release best practices by working within teams and educating the other stakeholder teams.
In this role, we believe you bring experience of operations and security using DevOps, along with strong analytical and automation skills that enable you to deliver the expected benefits to the business and digital products. Building and deploying distributed applications and big data pipelines in the cloud excites you. You will be working with GCP and AWS. Jenkins, Groovy scripting, shell scripting, Terraform, and Ansible (or equivalents) are among the wide array of tools you have used in the past.
This is an exciting opportunity to influence and build the DevSecOps framework for leading manufacturing platforms in the autonomous buildings space, while working with the latest technologies in a cloud-based environment in a multi-disciplinary team with platform architects, tech leads, data scientists, data engineers, and insight specialists.
JOB RESPONSIBILITIES:
Design, implement, and monitor enterprise-grade secure, fault-tolerant infrastructure.
Define and evolve Build & Release best practices by working within teams and educating the other stakeholder teams. These best practices should support traceability and auditability of change.
Ensure continuous availability of the various DevOps tools supporting SCM and release management, including source control, containerization, continuous integration, and change management (Jenkins, Docker, JIRA, SonarQube, Terraform, Google/Azure/AWS Cloud CLI).
Implement a framework of automated build and release pipelines.
Implement DevSecOps tools and quality gates with SLOs.
Implement SAST, DAST, IAST, and OSS tools in CI/CD pipelines.
Implement automated change management policies in the pipeline from dev to prod.
Work with cross-functional co-located teams on the design, development, and implementation of enterprise-scalable features related to enabling higher developer productivity, environment monitoring and self-healing, and facilitating autonomous delivery teams.
Build infrastructure automation tools and frameworks leveraging Docker and Kubernetes; operate as a technical expert on DevOps infrastructure projects pertaining to containerization, systems management, design, and architecture.
Perform performance analysis and optimization, monitoring and problem resolution, upgrade planning and execution, and process creation and documentation.
Integrate newly developed and existing applications into private, public, and hybrid cloud environments.
Automate deployment pipelines in a scalable, secure, and reliable manner.
Leverage application monitoring tools to troubleshoot and diagnose environment issues.
Foster a culture of automation where any repetitive work is automated.
Work closely with Cloud Infrastructure and Security teams to ensure organizational best practices are followed.
Translate non-functional requirements of development, security, and operations architectures into a design that can be implemented using the chosen set of software for the project.
Own the technical design and implementation for one or more software stacks of the DevSecOps team.
Design and implement the distributed code repository.
Implement automation pipelines to support code compilation, testing, and deployment into the software components of the entire solution.
Integrate monitoring of all software components in the entire solution, and mine the data streams for actionable events to remediate issues.
Implement configuration management pipelines to standardize environments.
Integrate DevSecOps software with credentials management tools.
Create non-functional test scenarios for verifying the DevSecOps software setup.
KEY QUALIFICATION & EXPERIENCES:
At least 5 years of relevant working experience in DevSecOps, task automation, or GitOps.
Demonstrated proficiency in installation, configuration, or implementation of one or more of the following: Jenkins, Azure DevOps, Bamboo, or software of similar capability; GitHub, GitLab, or software of similar capability; Jira, Asana, Trello, or software of similar capability; Ansible, Terraform, Chef Automate, or software of similar capability; Flux CD, or software of similar capability; any test automation software; any service virtualization software.
Operating system administration experience for Ubuntu, Debian, Alpine, RHEL.
Technical documentation writing experience.
DevOps Engineering certification for on-premises or public cloud is advantageous.
Experience with work planning and effort estimation is an advantage.
Strong problem-solving and analytical skills.
Strong interpersonal and written and verbal communication skills.
Highly adaptable to changing circumstances.
Interest in continuously learning new skills and technologies.
Experience with programming and scripting languages (e.g. Java, C#, C++, Python, Bash, PowerShell).
Experience with incident and response management.
Experience with Agile and DevOps development methodologies.
Experience with container technologies and supporting tools (e.g. Docker Swarm, Podman, Kubernetes, Mesos).
Experience working in cloud ecosystems (Microsoft Azure, AWS, Google Cloud Platform).
Experience with configuration management systems (e.g. Puppet, Ansible, Chef, Salt, Terraform).
Experience working with continuous integration/continuous deployment tools (e.g. Git, TeamCity, Jenkins, Artifactory).
Experience in GitOps-based automation is a plus.
Experience with GitHub Actions, GitHub Advanced Security, and GitHub Copilot.
BE/B.Tech/MCA or any equivalent degree in Computer Science, or related practical experience.
Must have 5+ years' working experience with Jenkins, GCP (or AWS/Azure), and Unix & Linux OS.
Must have experience with automation/configuration management tools (Jenkins using Groovy scripting, Terraform, Ansible, or an equivalent).
Must have experience with Kubernetes (GKE, kubectl, Helm) and containers (Docker).
Must have experience with JFrog Artifactory and SonarQube.
Extensive knowledge of institutionalizing Agile and DevOps tools, not limited to but including Jenkins, Subversion, Hudson, etc.
Experience with networking (TCP/IP, SSL, SMTP, HTTP, FTP, DNS, and more).
Hands-on with source code management tools like Git, Bitbucket, SVN, etc.
Working experience with monitoring tools like Grafana, Prometheus, Elasticsearch, Splunk, or other monitoring tools/processes.
Experience in enterprise high-availability platforms and network and security on GCP.
Knowledge and experience in the Java programming language.
Experience working on large-scale distributed systems with a deep understanding of design impacts on performance, reliability, operations, and security is a big plus.
Understanding of self-healing/immutable microservice-based architectures, cloud platforms, clustering models, and networking technologies.
Great interpersonal and communication skills.
Self-starter, able to work well in a fast-paced, dynamic environment with minimal supervision.
Must have public cloud provider certifications (Azure, GCP, or AWS).
CNCF certification is a plus.
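The "quality gates with SLOs" responsibility above amounts to a pipeline step that blocks promotion when scan metrics breach agreed thresholds. A minimal sketch, where the metric names and limits are hypothetical (not a real SonarQube API):

```python
# Hypothetical CI quality gate: report violations when scan metrics
# breach agreed thresholds (metric names and limits are illustrative).
THRESHOLDS = {
    "coverage_pct_min": 80.0,     # unit-test line-coverage floor
    "critical_vulns_max": 0,      # SAST/DAST critical-findings ceiling
    "code_smells_max": 50,
}

def quality_gate(metrics: dict) -> list:
    """Return a list of violations; an empty list means the gate passes."""
    violations = []
    if metrics["coverage_pct"] < THRESHOLDS["coverage_pct_min"]:
        violations.append("coverage below floor")
    if metrics["critical_vulns"] > THRESHOLDS["critical_vulns_max"]:
        violations.append("critical vulnerabilities present")
    if metrics["code_smells"] > THRESHOLDS["code_smells_max"]:
        violations.append("too many code smells")
    return violations

# A CI step would exit non-zero on any violation to block promotion.
print(quality_gate({"coverage_pct": 85.0, "critical_vulns": 1, "code_smells": 10}))
```

In a real pipeline the metrics would come from the scanner's report artifact, and the gate's exit code is what the automated change-management policy keys on.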
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Technical Architect - Cloud
Location: Hinjewadi, Pune
Experience: 9+ years
BASIC PURPOSE:
Lead Software Engineer - Cloud, who will be responsible for the planning, design, and deployment automation of platform solutions on AWS. Instrumental in profiling and improving front-end and back-end application performance; mentors team members and takes end-to-end technical ownership of applications. Must be able to stay on top of technology changes in the market and continuously look for opportunities to leverage new technology.
ESSENTIAL FUNCTIONS:
· Design, build, and implement performant and robust cloud platform solutions.
· Design and build data pipelines for supporting analytical solutions.
· Provide level-of-effort estimates to support planning activities.
· Provide microservices architecture and design specifications.
· Fix defects found during implementation or reported by the software test team.
· Support software process definition and improvement initiatives and the release process, working with the DevOps team on CI/CD pipelines developed with Terraform and CDK as Infrastructure-as-Code. Execute security architectures for cloud systems.
· Understand and recognize the quality consequences that may occur from improper performance of their specific job; maintain awareness of system defects that may occur in their area of responsibility, including product design, verification, validation, and testing activities.
· Mentor less experienced team members.
· Collaborate with Product Designers, Product Managers, Architects, and Software Engineers to deliver compelling user-facing products.
REPORTING RELATIONSHIPS:
· Reports to Technical Architect
QUALIFICATIONS:
· Bachelor's degree in computer science or a related engineering field, OR equivalent experience in a related field.
· 10+ years of experience in cloud application development.
· Expert proficiency in JavaScript/TypeScript and/or Java with Spring Boot or Quarkus.
· Experience in architecting and developing event-driven cloud-based solutions.
· Experience in AWS services including API Gateway, AppSync, Amplify, S3, CloudFront, Lambda, ECS/Fargate, Step Functions, SQS, EventBridge, Cognito, DynamoDB, Aurora PostgreSQL, OpenSearch/Elasticsearch, and AWS Pinpoint.
· Extensive experience in developing applications in POSIX-compliant environments.
· Strong knowledge of containerization, with expert knowledge of either Docker or Kubernetes.
· Proficient in IAM security and AWS networking.
· Expert understanding of building and working with CI/CD pipelines.
· Experience in designing, developing, and creating data pipelines, data warehouse applications, and analytical solutions, including machine learning.
· Deep cloud domain expertise in architecture, big data, microservice architectures, cloud technologies, data security and privacy, tools, and testing.
· Excellent programming skills in data pipeline technologies like Lambda, Kinesis, S3, EventBridge, and MSK.
· Extensive experience with Service-Oriented Architecture, microservices, and virtualization, and working with relational and non-relational databases.
· Excellent knowledge of building big data solutions using NoSQL databases.
· Experience with secure coding best practices and methodologies, vulnerability scans, threat modeling, and cyber-risk assessments.
· Familiar with modern build pipelines and tools.
· Ability to understand business requirements and translate them into technical designs.
· Familiarity with Git code versioning tools.
· Good written and verbal communication skills.
· Great team player.
PREFERRED SKILLS:
· Experience with RDBMS is a plus.
· Experience in Java, .NET, or Python is a plus.
· Experience in big data solutions and analytics, using BI tools like Power BI or AWS QuickSight, is a plus.
· Experience with other cloud computing platforms.
· Azure or AWS certification, such as Solutions Architect Expert, Azure Fundamentals, data scientist, developer, etc.
CRITICAL COMPETENCIES FOR SUCCESS:
* Analytical Skills: Demonstrates aptitude for analytical and problem-solving skills and the ability to conceptually pull together patterns or connections that are not clearly related; able to assess relevant facts, identify alternative approaches, and provide the best course of action.
* Strategic Agility: Eagerness and ability to learn quickly and leverage a flexible mindset in response to shifting dynamics, adversity, and/or change; continually pushes oneself, their teams, and their businesses to learn and to generate new ideas.
* Disciplined Execution: Orientation towards a process-focused, decisive course of action that will ensure client/customer needs are met with a high standard of excellence, urgency, and predictability; focused on the task at hand in the face of ambiguity, and applies past experience and expertise to consistently pull through results.
* Organizational Collaboration: Ability to partner across organizational lines and work cooperatively within and outside one's own team in order to best serve client needs and exceed the expectations of end customers and clients; actively supports key decisions and promotes a spirit of teamwork to demonstrate commitment to the company.
WORK CONDITIONS:
· Must possess comfort in learning, training, and engaging with others virtually through Microsoft Teams and Zoom.
· Must be able to perform the essential functions of the job, with or without reasonable accommodation.
Benefits
Health Insurance
Posted 1 week ago
3.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About QualityKiosk Technologies
QualityKiosk Technologies is one of the world's largest independent Quality Engineering (QE) providers and digital transformation enablers, helping companies build and manage applications for optimal performance and user experience. Founded in 2000, the company specializes in providing quality engineering, QA automation, performance assurance, intelligent automation (IA) and robotic process automation (RPA), customer experience management, site reliability engineering (SRE), digital testing as a service (DTaaS), cloud, and data analytics solutions and services.
With operations spread across 25+ countries and a workforce of more than 4,000 employees, the organization enables some of the leading banking, e-commerce, automotive, telecom, insurance, OTT, entertainment, pharmaceuticals, and BFSI brands to achieve their business transformation goals. QualityKiosk Technologies has been featured in renowned global advisory firms' reports, including those of Forrester, Gartner, The Everest Group, and Hurun Report, for its innovative, IP-led quality assurance solutions and the positive impact it has created for its clients in the fast-changing digital landscape.
QualityKiosk, which offers automated quality assurance solutions for clients across geographies and verticals, counts 50 of the Indian Fortune 100 companies and 18 of the global Fortune 500 companies as its key clients. The company is banking on its speed of execution and technology advancement as key factors to drive 5X growth in the next five years, both in revenue and number of employees.
Key Responsibilities:
- Design, implement, and optimize Elasticsearch clusters and associated applications.
- Develop and maintain scalable and fault-tolerant search architectures to support large-scale data.
- Troubleshoot performance and reliability issues within Elasticsearch environments.
- Integrate Elasticsearch with other tools like Logstash, Kibana, Beats, etc.
- Implement search features such as auto-complete, aggregations, fuzziness, and advanced search functionalities.
- Manage Elasticsearch data pipelines and work on data ingest, indexing, and transformation.
- Monitor, optimize, and ensure the health of Elasticsearch clusters and associated services.
- Conduct capacity planning and scalability testing for search infrastructure.
- Ensure high availability and disaster recovery strategies for Elasticsearch clusters.
- Collaborate with software engineers, data engineers, and DevOps teams to ensure smooth deployment and integration.
- Document solutions, configurations, and best practices.
- Stay updated on new features and functionalities of the Elastic Stack and apply them to enhance existing systems.
- Groom freshers and junior team members to enable them to take up responsibilities.
Required Skills & Qualifications:
- Experience: 3 years of hands-on experience with Elasticsearch and its ecosystem (Elasticsearch, Kibana, Logstash, Fleet Server, Elastic Agents, Beats).
- Core Technologies: Strong experience with Elasticsearch, including cluster setup, configuration, and optimization.
- Search Architecture: Experience designing and maintaining scalable search architectures and handling large datasets.
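To illustrate the fuzziness and aggregation features listed above, here is a minimal sketch of an Elasticsearch query body that combines a typo-tolerant match with a terms aggregation; the index and field names are hypothetical:

```python
import json

# Hypothetical search body for a product index: fuzzy full-text match
# plus a terms aggregation for faceting (names are illustrative).
search_body = {
    "query": {
        "match": {
            "title": {
                "query": "elasticserch",   # user typo
                "fuzziness": "AUTO",       # tolerate 1-2 edits based on term length
            }
        }
    },
    "aggs": {
        "by_category": {                   # facet counts returned alongside hits
            "terms": {"field": "category", "size": 10}
        }
    },
    "size": 20,
}

# With the official Python client this would run as:
#   es.search(index="products", body=search_body)
print(json.dumps(search_body))
```

Because the aggregation runs in the same request as the query, the facet counts reflect exactly the (fuzzily) matched document set, which is the usual building block for filtered search UIs.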
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Associate Software Developer (Fullstack SDE1)

As a Fullstack SDE1 at NxtWave, you:
- Get first-hand experience of building applications and seeing them released quickly to NxtWave learners (within weeks)
- Take ownership of the features you build and work closely with the product team
- Work in a great culture that continuously empowers you to grow in your career
- Enjoy the freedom to experiment and learn from mistakes (Fail Fast, Learn Faster)
- Get first-hand experience in scaling the features you build as the company, one of the fastest-growing edtech startups, grows rapidly
- Build in a world-class developer environment by applying clean coding principles, sound code architecture, and more

Responsibilities
- Design, implement, and ship user-centric features spanning frontend, backend, and database systems under guidance.
- Define and implement RESTful/GraphQL APIs and efficient, scalable database schemas.
- Build reusable, maintainable frontend components using modern state management practices.
- Develop backend services in Node.js or Python, adhering to clean-architecture principles.
- Write and maintain unit, integration, and end-to-end tests to ensure code quality and reliability.
- Containerize applications and configure CI/CD pipelines for automated builds and deployments.
- Enforce secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborate effectively with Product, Design, and engineering teams to understand and implement feature requirements.
- Own feature delivery from planning through production, and mentor interns or junior developers.

Qualifications & Skills
- 1+ years of experience building full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced experience with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Hands-on with state management patterns (Redux, MobX, or custom solutions).
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating with backend services.
- Solid knowledge of MySQL/PostgreSQL and familiarity with NoSQL stores (Elasticsearch, Redis).
- Experience using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- Skilled in writing and maintaining tests with Jest, React Testing Library, Pytest, and Cypress.
- Familiar with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- Product-first thinker with strong problem-solving, debugging, and communication skills.

Qualities we'd love to find in you:
- The attitude to always strive for the best outcomes and enthusiasm to deliver high-quality software
- Strong collaboration abilities and a flexible, friendly approach to working with teams
- Strong determination with a constant eye on solutions
- Creative ideas and a problem-solving mindset
- Openness to receiving objective criticism and improving upon it
- Eagerness to learn and zeal to grow
- Strong communication skills are a huge plus

Work Location: Hyderabad

About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, revolutionizing the 21st-century job market by transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay), and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner of NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence.

Some of its prestigious recognitions include:
- Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
- ‘Startup Spotlight Award of the Year’ by T-Hub in 2023
- ‘Best Tech Skilling EdTech Startup of the Year 2022’ by Times Business Awards
- ‘The Greatest Brand in Education’ in a research-based listing by URS Media
- Founders Anupam Pedarla and Sashank Gujjula honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education

NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. It now has paid subscribers from 650+ districts across India, and its learners are hired by more than 2,000 companies, including Amazon, Accenture, IBM, Bank of America, TCS, and Deloitte.

Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle
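To make the "clean-architecture" and API responsibilities above concrete, here is a minimal sketch of the kind of separation the posting describes: a framework-agnostic service layer behind a thin request handler. All names (`User`, `UserService`, `create_user_handler`) are illustrative, not part of any NxtWave codebase.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    email: str

class UserService:
    """Business logic, independent of any web framework."""
    def __init__(self):
        self._users = {}
        self._next_id = 1

    def create_user(self, email: str) -> User:
        # Domain-level validation lives in the service, not the handler.
        if "@" not in email:
            raise ValueError("invalid email")
        user = User(id=self._next_id, email=email)
        self._users[user.id] = user
        self._next_id += 1
        return user

def create_user_handler(service: UserService, body: dict) -> tuple[int, dict]:
    """Thin HTTP-style handler: check the request shape, delegate, map to a response."""
    email = body.get("email")
    if not email:
        return 400, {"error": "email is required"}
    try:
        user = service.create_user(email)
    except ValueError as exc:
        return 422, {"error": str(exc)}
    return 201, {"id": user.id, "email": user.email}
```

Wiring this into Flask/FastAPI (or an Express equivalent in Node.js) then only needs a thin adapter, which keeps the service layer unit-testable without a running server.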
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public- and private-sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications using Elasticsearch and Splunk to meet client requirements
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization.
- Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources.
- Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities.
- Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements.

Preferred Technical and Professional Experience
- Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling.
- Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes.
- Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.
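As a concrete illustration of the "data validation, cleansing, and governance" work the listing mentions, here is a minimal sketch of a cleansing step that normalizes, validates, and deduplicates raw records before loading. Field names and rules are illustrative; in practice this logic would typically live in a Talend job or a Snowflake transformation rather than standalone Python.

```python
def cleanse_rows(rows):
    """Normalize, validate, and deduplicate raw records before loading."""
    seen = set()
    clean, rejected = [], []
    for row in rows:
        # Normalize: trim whitespace and lowercase the natural key.
        email = (row.get("email") or "").strip().lower()
        amount = row.get("amount")
        # Validate: route bad rows to a reject list for later inspection,
        # rather than silently dropping them.
        if "@" not in email or not isinstance(amount, (int, float)):
            rejected.append(row)
            continue
        # Deduplicate on the natural key (email here).
        if email in seen:
            continue
        seen.add(email)
        clean.append({"email": email, "amount": float(amount)})
    return clean, rejected
```

Keeping a separate reject stream (instead of discarding invalid rows) is what makes the step auditable, which is the core of the governance requirement above.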
Posted 1 week ago