3.0 years
6 - 7 Lacs
India
On-site
Job Title: DevOps Engineer (3+ Years) — On-Site in Indore
Location: Indore (On-Site)
Job Type: Full-Time

About the Role:
We are expanding our team with passionate DevOps Engineers to manage, automate, and scale our cloud infrastructure. If you enjoy solving complex operational challenges with smart, scalable solutions, we want to hear from you!

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines.
- Manage containerization using Docker and Kubernetes.
- Automate infrastructure with Ansible, Terraform, and similar tools.
- Monitor, troubleshoot, and optimize cloud environments (Azure).
- Collaborate with cross-functional teams to deliver secure, reliable solutions.
- Document processes clearly and maintain best practices.

Required Qualifications:
- 3+ years of experience as a DevOps Engineer.
- Azure DevOps, Azure services, Kubernetes, Python, Elasticsearch, Bash scripting.
- Experience with Docker, Kubernetes, Ansible, Terraform.
- Familiarity with Git, Git Flow, and continuous integration tools.
- Basic understanding of ETL/ELT, data modeling, and warehousing.
- Good troubleshooting and documentation skills.

Job Types: Full-time, Contractual / Temporary
Pay: ₹50,012.10 - ₹65,931.94 per month
Schedule: Day shift, fixed shift, Monday to Friday

Application Question(s):
- Do you have a minimum of 3 years of proven experience working as a DevOps Engineer?
- Which of the following tools and technologies do you have hands-on experience with: Azure DevOps, Azure services, Kubernetes, Docker, Ansible, Terraform, Python, Elasticsearch, Bash scripting, CI/CD pipelines (Git, Git Flow), ETL/ELT, data modeling, or data warehousing?
- Are you comfortable working on-site at our Indore location?
- Are you available to join immediately (within 1–2 days)?

Work Location: In person
Speak with the employer: +91 7880090179
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role:
Passionate about building, owning, and operating massively scalable systems? Experienced in being part of a team of highly competent software engineers impacting millions of lives around you? If so, here is an opportunity tailored for you.

As an SDE, you will interact with the entire engineering team to solve problems and build, extend, optimize, and refactor the back-end architecture of our growing product. This will involve working on our existing codebase along with new projects. Airtel Xtelify has a highly passionate, engineering-driven environment where your contribution will be felt immediately. All teams at Xtelify are involved in every part of our development life cycle. Among many tools and technologies, we use Golang, Java, Postgres, Aerospike, Redis, and Kafka extensively for our back-end development.

Responsibilities:
- Developing a highly concurrent and distributed system
- Performance optimization and problem diagnosis
- Designing/developing for high availability
- Designing/developing and testing new features
- Supporting release and documentation of developed features
- Estimating the effort required to develop and implement
- Helping define coding standards and development processes
- Willingness to learn and adapt to different technologies

Skill Sets:
- Extensive hands-on experience with Golang in production-grade systems
- Experience dealing with highly concurrent, distributed architectures/systems
- Strong data structures and algorithms concepts
- Comfortable working in a *nix environment and using the CLI
- Experience building HTTP- and gRPC-based services
- Willingness to get your hands dirty; not afraid of low-level details
- Ability to carefully break down a problem into small pieces
- Ability to effectively communicate problems and solutions to different team members
- Able to debug non-trivial application code
- Able to write clear, concise source-code documentation, unit tests, and integration tests
- Able to think beyond code to architecture and user experience
- Experience with SQL and NoSQL databases (MySQL/Postgres, Redis, Elasticsearch/MongoDB) and REST
- Proficient with code versioning tools, such as Git
- Familiarity with deployment on cloud (AWS, GCP) with Jenkins, Ansible, Consul, or NATS is a plus
- Familiarity with frameworks/tools like StatsD, OpenTracing, and Prometheus is a plus
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience: 3.00+ years
Salary: INR 1600000-2000000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - a Seed-Funded B2B SaaS Company – Procurement Analytics)

What do you need for this opportunity?
Must-have skills: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

Join the Team Revolutionizing Procurement Analytics at SenseCloud

Imagine working at a company where you get the best of both worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done, we're redefining it.

At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line for the IT and analytics teams' attention, no more clunky dashboards: just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions.

If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About the Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems: think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills:
- 3–5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on experience with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
- Experience building and securing REST/GraphQL APIs and microservices.
- Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization).
- Proficiency with Git, Docker, and CI/CD (GitHub Actions, GitLab CI, or similar).
- Knowledge of MLOps tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines.

Core Soft Skills:
- Product mindset: translate ambiguous requirements into clear deliverables and user value.
- Communication: explain complex AI concepts to both engineers and executives; write crisp documentation.
- Collaboration and ownership: thrive in cross-disciplinary teams; proactively unblock yourself and others.
- Bias for action: experiment quickly, measure, and iterate without sacrificing quality or security.
- Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space.

Nice-to-Haves:
- Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake).
- Hands-on experience with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns.
- Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas).
- Familiarity with Palantir Foundry.
- Knowledge of privacy-enhancing techniques (data anonymization, differential privacy).
- Prior work on conversational UX, prompt marketplaces, or agent simulators.
- Contributions to open-source AI projects or published research.

Why Join Us?
- Direct impact on products used by Fortune 500 teams.
- Work with cutting-edge models and shape best practices for enterprise AI agents.
- Collaborative culture that values experimentation, continuous learning, and work–life balance.
- Competitive salary, equity, remote-first flexibility, and a professional development budget.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 weeks ago
6.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
Company Description
Wiser Solutions is a suite of in-store and eCommerce intelligence and execution tools. We're on a mission to enable brands, retailers, and retail channel partners to gather intelligence and automate actions to optimize in-store and online pricing, marketing, and operations initiatives. Our Commerce Execution Suite is available globally.

Job Description
We are looking for a highly capable Senior Full Stack Engineer to be a core contributor in developing our suite of product offerings. If you love working on complex problems and writing clean code, you will love this role. Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze semi-structured data from different sources (20 million+ products from 500+ websites into our catalog of 500 million+ products). We help our customers discover new patterns in their data that can be leveraged so that they can become more competitive and increase their revenue.

Essential Functions:
- Think like our customers: you will work with product and engineering leaders to define intuitive solutions.
- Design customer-facing UI and back-end services for various business processes.
- Develop high-performance applications by writing testable, reusable, and efficient code.
- Implement effective security protocols, data protection measures, and storage solutions.
- Improve the quality of our solutions: you will hold yourself and your team members accountable for writing high-quality, well-designed, maintainable software.
- Own your work: you will take responsibility for shepherding your projects from idea through delivery into production.
- Bring new ideas to the table: some of our best innovations originate within the team.
- Guide and mentor others on the team.

Technologies We Use:
- Languages: NodeJS/NestJS/TypeScript, SQL, React/Redux, GraphQL
- Infrastructure: AWS, Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
- Databases: Postgres, MongoDB, Redis, Elasticsearch, Trino, Iceberg
- Streaming and Queuing: Kafka, NATS, KEDA

Qualifications:
- 6+ years of professional software engineering/development experience.
- Proficiency in architecting and delivering solutions within a distributed software platform.
- Full-stack engineering experience, including front-end frameworks (React/TypeScript, Redux) and back-end technologies such as NodeJS/NestJS/TypeScript and GraphQL.
- Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs.
- Proven ability to work effectively and to prioritize and organize your work in a highly dynamic environment.
- Proven track record of working in highly distributed, event-driven systems.
- Strong proficiency working with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Trino, etc.).
- Solid understanding of data pipeline and workflow automation: orchestration tools, scheduling, and monitoring.
- Solid understanding of ETL/ELT and OLTP/OLAP concepts.
- Solid understanding of data lakes, data warehouses, and modeling practices (Data Vault, etc.), and experience leveraging data lake solutions (e.g., AWS Glue, dbt, Trino, Iceberg).
- Ability to clean, transform, and aggregate data using SQL or scripting languages.
- Ability to design and estimate tasks and coordinate work with other team members during iteration planning.
- Solid understanding of AWS, Linux, and infrastructure concepts.
- Track record of lifting and challenging teammates to higher levels of achievement.
- Experience measuring, driving, and improving the software engineering process.
- Good testing habits and a strong eye for quality.
- Outstanding organizational, communication, and relationship-building skills conducive to driving consensus; able to work well in a cross-functional environment.
- Experience working in an agile team environment.
- Ownership: a sense of personal accountability and responsibility to drive execution from start to finish.
- Drive adoption of Wiser's Product Delivery organization principles across the department.

Bonus Points:
- Experience with CQRS
- Experience with Domain-Driven Design
- Experience with C4 modeling
- Experience working within a retail or ecommerce environment
- Experience with AI coding agents (Windsurf, Cursor, Claude, ChatGPT, etc.) and prompt engineering

Why Join Wiser Solutions?
- Work on an industry-leading product trusted by top retailers and brands.
- Be at the forefront of pricing intelligence and data-driven decision-making.
- A collaborative, fast-paced environment where your impact is tangible.
- Competitive compensation, benefits, and career growth opportunities.

Additional Information
EEO STATEMENT: Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who are we?
Smarsh empowers its customers to manage risk and unleash intelligence in their digital communications. Our growing community of over 6,500 organizations in regulated industries counts on Smarsh every day to help them spot compliance, legal, or reputational risks in 80+ communication channels before those risks become regulatory fines or headlines. Relentless innovation has fueled our journey to consistent leadership recognition from analysts like Gartner and Forrester, and our sustained, aggressive growth has landed Smarsh on the annual Inc. 5000 list of fastest-growing American companies since 2008.

About Enterprise Archive
Enterprise Archive is a cloud-based platform that stores and handles (archive/search/discovery) petabytes of data. It uses cutting-edge cloud-scale technologies (such as Elasticsearch, MongoDB, Storm, Kafka, and Hazelcast) to solve very complex storage problems at scale.

Qualifications and Experience
- Bachelor's or Master's degree in Engineering or MCA, with demonstrated expertise in enterprise application development
- 3 to 6 years of extensive hands-on software development experience
- Proficiency in Angular, TypeScript, HTML5, and CSS3
- Practical experience with unit testing frameworks such as Karma and Jasmine
- Familiarity with the micro-frontend architectural pattern
- Strong foundational knowledge of core Java concepts and APIs
- Hands-on experience with the Spring Boot framework
- Proven experience working within Agile/Scrum development processes

Preferred Qualifications
- Understanding of microservices architecture and design principles
- Experience with NoSQL databases such as MongoDB and search engines like Elasticsearch
- Familiarity with cloud-native application development, deployment, and management

About Our Culture
Smarsh hires lifelong learners with a passion for innovating with purpose, humility, and humor. Collaboration is at the heart of everything we do. We work closely with the most popular communications platforms and the world's leading cloud infrastructure platforms. We use the latest in AI/ML technology to help our customers break new ground at scale. We are a global organization that values diversity, and we believe that providing opportunities for everyone to be their authentic self is key to our success. Smarsh leadership, culture, and commitment to developing our people have all garnered Comparably.com Best Places to Work Awards. Come join us and find out what the best work of your career looks like.
Posted 3 weeks ago
7.0 years
0 Lacs
India
On-site
Glowingbud is seeking a Senior Backend Developer with 7+ years of experience to develop and maintain high-performance APIs and scalable backend systems. The ideal candidate must have expertise in Node.js and MongoDB, experience in Multi-tenant SaaS applications, and a strong understanding of microservices architecture. Additionally, experience handling large-scale data and high-traffic production systems is crucial. This role requires close collaboration with frontend developers, DevOps, and product teams to build robust, scalable, and efficient backend solutions. About Company Glowingbud is a rapidly growing eSIM services platform that simplifies connectivity with powerful APIs, robust B2B and B2C interfaces, and seamless integrations with Telna. Our platform enables global eSIM lifecycle management, user onboarding, secure payment systems, and scalable deployments. Recently acquired by Telna (https://www.telna.com), we are expanding our product offerings and team to meet increasing demand and innovation goals. Key Responsibilities API Development: Design, develop, optimize, and maintain high-performance RESTful APIs using Node.js and MongoDB. Scalability & Performance: Optimize backend performance for handling large data volumes and high-traffic production environments. Multi-Tenant SaaS: Develop and maintain multi-tenant architectures ensuring data policies, security, scalability, and efficiency. Microservices Architecture: Design and implement microservices-based solutions, ensuring modularity and maintainability. Database Management: Proficiently manage and optimize MongoDB, including indexing, aggregation, and performance tuning. System Engineering: Work with DevOps to ensure scalability, reliability, and security of backend systems. Product Development: Collaborate with product teams to build long-term, scalable backend solutions. Code Quality & Security: Write clean, maintainable, and secure code following industry best practices. 
Enforce coding standards, conduct detailed code reviews. Monitoring & Debugging: Implement logging, monitoring, and debugging tools to ensure system reliability. Collaboration: Work closely with frontend teams and DevOps to ensure seamless API integrations and deployments. Qualifications 7+ years of experience in backend development with Node.js and MongoDB. Strong understanding of microservices architecture and system design principles. Experience in building and maintaining multi-tenant SaaS applications. Proven experience handling large-scale data and high-traffic production systems. Proficiency in MongoDB, including schema design, indexing strategies, and performance optimization. Experience with event-driven architecture and messaging queues (e.g., AWS SQS, RabbitMQ, Kafka). Knowledge of authentication and authorization mechanisms (JWT, AWS Cognito, SSO). Strong experience with API development best practices, security, and rate limiting. Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines. Proficiency in cloud services (AWS, GCP, or Azure) for backend infrastructure. Preferred Skills Experience with Redis, Elasticsearch, or other caching mechanisms. Knowledge of serverless architectures and cloud-native development. Exposure to REST/gRPC for API communication. Understanding of data streaming and real-time processing. Experience with NoSQL and relational database hybrid architectures. Familiarity with observability tools (Prometheus, Grafana, ELK Stack). Experience in automated testing for backend systems. Skills: multi-tenant SaaS applications, REST, RESTful web services, logging and monitoring tools, AWS, API, CI/CD pipelines, database management, microservices, API development, microservices architecture, Node.js, MongoDB, design, containerization (Docker, Kubernetes), event-driven architecture, cloud services (AWS, GCP, Azure), authentication and authorization mechanisms
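The qualifications above call out API rate limiting as a core skill. As an illustration only (the posting names no specific library, and the role itself is Node.js-based; Python is used here for brevity), the token-bucket algorithm commonly placed in front of such APIs can be sketched as:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens/second,
    allows bursts up to `capacity` requests. Illustrative sketch only."""

    def __init__(self, rate: float, capacity: int, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.now = now          # injectable clock makes testing deterministic
        self.last = now()

    def allow(self) -> bool:
        current = self.now()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (current - self.last) * self.rate)
        self.last = current
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Fake clock so the behaviour is reproducible.
clock = [0.0]
bucket = TokenBucket(rate=1.0, capacity=2, now=lambda: clock[0])
print([bucket.allow() for _ in range(3)])  # burst of 2 allowed, 3rd rejected
clock[0] += 1.0
print(bucket.allow())                      # one token refilled after 1 second
```

In production this state would typically live in a shared store such as Redis (also listed in the posting) so that limits hold across multiple API instances.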
Posted 3 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. Requirements We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources – time series, equipment, documents, 3D objects – into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software. Key Responsibilities: Design, develop, and maintain scalable, API-driven backend services using Kotlin. Align backend systems with modern data modeling and orchestration standards. Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform. Implement and refine RESTful APIs following established design guidelines. Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability. Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts. Drive software development best practices including code reviews, documentation, and CI/CD process adherence. Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform). 
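The platform described above associates and matches data from diverse sources (time series, equipment, documents) into a unified model. A minimal sketch of that idea, assuming nothing about the real service beyond the posting (the names, threshold, and greedy strategy below are all illustrative, and shown in Python rather than the role's Kotlin):

```python
from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Strip separators and case so "PUMP_01-A" and "pump 01 a" compare equal.
    return "".join(ch for ch in name.lower() if ch.isalnum())

def match_entities(sources, targets, threshold=0.8):
    """Greedy fuzzy match of source names (e.g. time-series tags) to target
    names (e.g. equipment IDs). Returns {source: best_target} for matches
    scoring at or above `threshold`."""
    matches = {}
    for s in sources:
        best, best_score = None, 0.0
        for t in targets:
            score = SequenceMatcher(None, normalize(s), normalize(t)).ratio()
            if score > best_score:
                best, best_score = t, score
        if best_score >= threshold:
            matches[s] = best
    return matches

tags = ["PUMP_01-A.pressure", "VALVE-17.temp"]
equipment = ["pump01a", "valve17", "compressor09"]
print(match_entities(tags, equipment, threshold=0.6))
```

A real contextualization service would replace the O(sources × targets) loop with indexed candidate generation and, per the bonus qualifications, possibly LLM-based matching for ambiguous cases.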
Qualifications: 5+ years of backend development experience, with a strong focus on Kotlin. Proven ability to design and maintain robust, API-centric microservices. Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows. Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems. Strong understanding of distributed systems, data modeling, and software scalability principles. Excellent communication skills and ability to work in a cross-functional, English-speaking environment. Bachelor’s or Master’s degree in Computer Science or related discipline. Bonus Qualifications: Experience with Python for auxiliary services, data processing, or SDK usage. Knowledge of data contextualization or entity resolution techniques. Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships. Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus). Experience with Terraform, Prometheus, and scalable backend performance testing. Job responsibilities About The Role And Key Responsibilities Develop Data Fusion – a robust, state-of-the-art SaaS for industrial data. Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into our platform in a secure and scalable way and enabling high-performance data science pipelines. Work with application teams to ensure a delightful user experience that helps the user solve complex real-world problems that have yet to be solved. Work with distributed open-source software such as Kubernetes, Kafka, Spark and similar to build scalable and performant solutions. Work with databases or storage systems such as PostgreSQL, Elasticsearch or S3-API-compatible blob stores. Help shape the culture and methodology of a rapidly growing company. What we offer Culture of caring. 
At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. 
About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 3 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. Requirements We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources – time series, equipment, documents, 3D objects – into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software. Key Responsibilities: Design, develop, and maintain scalable, API-driven backend services using Kotlin. Align backend systems with modern data modeling and orchestration standards. Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform. Implement and refine RESTful APIs following established design guidelines. Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability. Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts. Drive software development best practices including code reviews, documentation, and CI/CD process adherence. Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform). 
Qualifications: 7+ years of backend development experience, with a strong focus on Kotlin. Proven ability to design and maintain robust, API-centric microservices. Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows. Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems. Strong understanding of distributed systems, data modeling, and software scalability principles. Excellent communication skills and ability to work in a cross-functional, English-speaking environment. Bachelor’s or Master’s degree in Computer Science or related discipline. Bonus Qualifications: Experience with Python for auxiliary services, data processing, or SDK usage. Knowledge of data contextualization or entity resolution techniques. Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships. Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus). Experience with Terraform, Prometheus, and scalable backend performance testing. Job responsibilities About The Role And Key Responsibilities Develop Data Fusion – a robust, state-of-the-art SaaS for industrial data. Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into our platform in a secure and scalable way and enabling high-performance data science pipelines. Work with application teams to ensure a delightful user experience that helps the user solve complex real-world problems that have yet to be solved. Work with distributed open-source software such as Kubernetes, Kafka, Spark and similar to build scalable and performant solutions. Work with databases or storage systems such as PostgreSQL, Elasticsearch or S3-API-compatible blob stores. Help shape the culture and methodology of a rapidly growing company. What we offer Culture of caring. 
At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. 
About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 3 weeks ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Location: India- Pune (Amdocs Site) In one sentence Secures enterprise information by developing, implementing, and enforcing security controls, safeguards, policies, and procedures. All you need is... Bachelor’s degree in computer science, Information Security, or related field (or equivalent experience). 3+ years of hands-on experience in information security, with a focus on threat detection, penetration testing, and AI-driven security solutions. Demonstrated experience working in financial or SaaS security environments (e.g., PCI DSS, SOC 2, ISO 27001). Advanced knowledge of networking protocols, encryption, firewalls, IDS/IPS, and VPNs. Strong experience with cloud platforms (AWS, GCP, or Azure), including security configurations, monitoring, and automation. Hands-on experience with security tools such as EDR, SIEM (Splunk, ElasticSearch, etc.), vulnerability scanners (Nessus, Qualys), and threat intelligence platforms. Practical experience in penetration testing (e.g., OWASP Top 10, API testing) and red teaming. Expertise in scripting languages (Python, PowerShell) and automation tools. Security Certifications: CEH (Certified Ethical Hacker), CISSP, CISA, or equivalent certifications (required). Additional certifications in cloud security (AWS Certified Security Specialty, etc.) or AI/ML for security (optional but preferred). What will your job look like? Proactively monitor and assess emerging threats using advanced AI-driven tools. Analyze identified threats and develop effective remediation plans to minimize risk to critical systems and data. Lead proactive threat hunts leveraging AI, machine learning models, and automation tools. Identify Indicators of Compromise (IOCs) and detect patterns to anticipate future attacks. Perform advanced penetration testing exercises to identify vulnerabilities, misconfigurations, and weaknesses in systems. Collaborate in purple team exercises to validate security measures and improve resilience. 
Participate in risk assessments, ensuring compliance with financial industry regulations (e.g., PCI DSS, SOC 2) and internal security policies. Provide guidance on mitigating risks through the integration of AI-based security solutions. Lead the investigation and response to security incidents. Utilize machine learning and EDR tools to perform in-depth analysis of malware, root causes, and attack methodologies. Conduct continuous monitoring using SIEM (Security Information and Event Management), AI-driven anomaly detection systems, and advanced analytics tools to detect and respond to security events. Collaborate with SecDevOps and Engineering teams to automate security controls, incident responses, and vulnerability management using AI and advanced scripting (Python, PowerShell). Work closely with teams across the organization to integrate security at every stage of development (DevSecOps), ensuring secure cloud infrastructure, services, and APIs. Deep involvement in securing public cloud environments (AWS, Azure, GCP), leveraging AI tools to detect misconfigurations, vulnerabilities, and unauthorized access attempts. Support penetration testing efforts, identifying vulnerabilities within cloud and on-premise infrastructure. Lead and contribute to purple team engagements to test and improve defensive capabilities. Stay current with the latest AI, machine learning, and cybersecurity trends. Actively research emerging threats and innovative tools to protect the organization’s assets. Evaluate and implement third-party security tools, AI-based solutions, and threat intelligence platforms to enhance security posture and detection capabilities. Use AI and behavioral analytics to proactively detect threats that evade traditional security solutions. Develop custom threat detection algorithms where needed. Leverage threat intelligence feeds, machine learning models, and threat-hunting tools to proactively identify and mitigate risks from advanced persistent threats (APTs).
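The responsibilities above centre on identifying Indicators of Compromise (IOCs) and automating detection with Python scripting. A minimal, purely illustrative sketch (the feed contents below are made up, using documentation-reserved values) of screening log events against a threat-intelligence feed:

```python
import ipaddress

# Hypothetical threat-intel feed: known-bad IPs, file hashes, and domains.
IOC_FEED = {
    "ips": {"203.0.113.7"},  # TEST-NET-3 address, illustrative only
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb924"
               "27ae41e4649b934ca495991b7852b855"},  # SHA-256 of empty input
    "domains": {"bad.example.com"},
}

def check_event(event: dict) -> list:
    """Return the list of IOC categories this log event matches."""
    hits = []
    ip = event.get("src_ip")
    if ip and str(ipaddress.ip_address(ip)) in IOC_FEED["ips"]:
        hits.append("ip")
    if event.get("file_sha256") in IOC_FEED["sha256"]:
        hits.append("sha256")
    domain = event.get("dns_query", "")
    # Match the domain itself and any subdomain of it.
    if any(domain == d or domain.endswith("." + d) for d in IOC_FEED["domains"]):
        hits.append("domain")
    return hits

print(check_event({"src_ip": "203.0.113.7", "dns_query": "c2.bad.example.com"}))
# → ['ip', 'domain']
```

In practice the feed would be refreshed from a threat-intelligence platform and the matching wired into the SIEM pipeline rather than run as a standalone script.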
Posted 3 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description The contextualization platform enables large-scale data integration and entity matching across heterogeneous sources. The current engineering focus is to modernize the architecture for better scalability and orchestration compatibility, refactor core services, and lay the foundation for future AI-based enhancements. This is a pivotal development initiative with clear roadmap milestones and direct alignment with a multi-year digital transformation strategy. Requirements We are looking for a skilled and motivated Senior Backend Engineer with strong expertise in Kotlin to join a newly established scrum team responsible for enhancing a core data contextualization platform. This service plays a central role in associating and matching data from diverse sources – time series, equipment, documents, 3D objects – into a unified data model. You will lead backend development efforts to modernize and scale the platform by integrating with an updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software. Key Responsibilities: Design, develop, and maintain scalable, API-driven backend services using Kotlin. Align backend systems with modern data modeling and orchestration standards. Collaborate with engineering, product, and design teams to ensure seamless integration across the broader data platform. Implement and refine RESTful APIs following established design guidelines. Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability. Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts. Drive software development best practices including code reviews, documentation, and CI/CD process adherence. Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform). 
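Among the responsibilities above is conducting load testing. One small, concrete piece of that work is summarizing latency samples into tail percentiles; a sketch using only the standard library (the sample data and report shape are illustrative, not from the posting):

```python
import statistics

def latency_report(samples_ms):
    """Summarize load-test latencies as p50/p95/p99 using the 'inclusive'
    quantile method, which interpolates between observed samples."""
    qs = statistics.quantiles(sorted(samples_ms), n=100, method="inclusive")
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98], "max": max(samples_ms)}

# Simulated run: 99 fast requests plus one slow outlier. Note how the
# outlier barely moves p50/p95 but shows up in p99 and max.
samples = [10.0] * 99 + [500.0]
print(latency_report(samples))
```

Reporting tail percentiles rather than averages is the usual choice here, since a mean would hide exactly the slow requests a reliability effort cares about.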
Qualifications: 3+ years of backend development experience, with a strong focus on Kotlin. Proven ability to design and maintain robust, API-centric microservices. Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows. Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems. Strong understanding of distributed systems, data modeling, and software scalability principles. Excellent communication skills and ability to work in a cross-functional, English-speaking environment. Bachelor’s or Master’s degree in Computer Science or related discipline. Bonus Qualifications: Experience with Python for auxiliary services, data processing, or SDK usage. Knowledge of data contextualization or entity resolution techniques. Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships. Exposure to LLM-based matching or AI-enhanced data processing (not required but a plus). Experience with Terraform, Prometheus, and scalable backend performance testing. Job responsibilities About The Role And Key Responsibilities Develop Data Fusion – a robust, state-of-the-art SaaS for industrial data. Solve concrete industrial data problems by designing and implementing delightful APIs and robust services on top of Data Fusion. Examples include integrating data sources into our platform in a secure and scalable way and enabling high-performance data science pipelines. Work with application teams to ensure a delightful user experience that helps the user solve complex real-world problems that have yet to be solved. Work with distributed open-source software such as Kubernetes, Kafka, Spark and similar to build scalable and performant solutions. Work with databases or storage systems such as PostgreSQL, Elasticsearch or S3-API-compatible blob stores. Help shape the culture and methodology of a rapidly growing company. What we offer Culture of caring. 
At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. 
About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
As a Fullstack SDE - II at NxtWave, you Build applications at a scale and see them released quickly to the NxtWave learners (within weeks) Get to take ownership of the features you build and work closely with the product team Work in a great culture that continuously empowers you to grow in your career Enjoy freedom to experiment & learn from mistakes (Fail Fast, Learn Faster) NxtWave is one of the fastest growing edtech startups. Get first-hand experience in scaling the features you build as the company grows rapidly Build in a world-class developer environment by applying clean coding principles, code architecture, etc. Responsibilities Lead design and delivery of complex end-to-end features across frontend, backend, and data layers. Make strategic architectural decisions on frameworks, datastores, and performance patterns. Review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns. Build and maintain shared UI component libraries and backend service frameworks for team reuse. Identify and eliminate performance bottlenecks in both browser rendering and server throughput. Instrument services with metrics and logging, driving SLIs, SLAs, and observability. Define and enforce comprehensive testing strategies: unit, integration, and end-to-end. Own CI/CD pipelines, automating builds, deployments, and rollback procedures. Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices. Partner with Product, UX, and Ops to translate business objectives into technical roadmaps. Facilitate sprint planning, estimation, and retrospectives for predictable deliveries. Mentor and guide SDE-1s and interns; participate in hiring. Qualifications & Skills 3–5 years building production Full stack applications end-to-end with measurable impact. Proven leadership in Agile/Scrum environments with a passion for continuous learning. Deep expertise in React (or Angular/Vue) with TypeScript and modern CSS methodologies. 
- Proficient in Node.js (Express/NestJS), Python (Django/Flask/FastAPI), or Java (Spring Boot).
- Expert in designing RESTful and GraphQL APIs and scalable database schemas.
- Knowledge of MySQL/PostgreSQL indexing, NoSQL (Elasticsearch/DynamoDB), and caching (Redis).
- Knowledge of containerization (Docker) and commonly used AWS services such as Lambda, EC2, S3, API Gateway, etc.
- Skilled in unit/integration testing (Jest, pytest) and E2E testing (Cypress, Playwright).
- Frontend profiling (Lighthouse) and backend tracing for performance tuning.
- Secure coding: OAuth2/JWT, XSS/CSRF protection, and familiarity with compliance regimes.
- Strong communicator able to convey technical trade-offs to non-technical stakeholders.
- Experience in reviewing pull requests and providing constructive feedback to the team.
Qualities we'd love to find in you:
- The attitude to always strive for the best outcomes and an enthusiasm to deliver high-quality software
- Strong collaboration abilities and a flexible, friendly approach to working with teams
- Strong determination with a constant eye on solutions
- Creative ideas with a problem-solving mindset
- Openness to receiving objective criticism and improving upon it
- Eagerness to learn and zeal to grow
- Strong communication skills are a huge plus
Work Location: Hyderabad
About NxtWave
NxtWave is one of India's fastest-growing ed-tech startups, revolutionizing the 21st-century job market. NxtWave is transforming youth into highly skilled tech professionals through its CCBP 4.0 programs, regardless of their educational background. NxtWave was founded by Rahul Attuluri (ex-Amazon, IIIT Hyderabad), Sashank Reddy (IIT Bombay) and Anupam Pedarla (IIT Kharagpur). Supported by Orios Ventures, Better Capital, and Marquee Angels, NxtWave raised $33 million in 2023 from Greater Pacific Capital. As an official partner for NSDC (under the Ministry of Skill Development & Entrepreneurship, Govt. of India) and recognized by NASSCOM, NxtWave has earned a reputation for excellence.
Some of its prestigious recognitions include:
- Technology Pioneer 2024 by the World Economic Forum, one of only 100 startups chosen globally
- 'Startup Spotlight Award of the Year' by T-Hub in 2023
- 'Best Tech Skilling EdTech Startup of the Year 2022' by Times Business Awards
- 'The Greatest Brand in Education' in a research-based listing by URS Media
- NxtWave founders Anupam Pedarla and Sashank Gujjula were honoured in the 2024 Forbes India 30 Under 30 for their contributions to tech education
NxtWave breaks learning barriers by offering vernacular content for better comprehension and retention. NxtWave now has paid subscribers from 650+ districts across India. Its learners are hired by over 2,000 companies including Amazon, Accenture, IBM, Bank of America, TCS, Deloitte and more.
Know more about NxtWave: https://www.ccbp.in
Read more about us in the news – Economic Times | CNBC | YourStory | VCCircle
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
Genpact (NYSE: G) is a global professional services and solutions firm dedicated to creating outcomes that shape the future. With a workforce of over 125,000 individuals across more than 30 countries, our team is distinguished by an innate curiosity, entrepreneurial agility, and a commitment to delivering enduring value to our clients. Fueled by our purpose of the relentless pursuit of a world that works better for people, we engage with and transform leading enterprises, including the Fortune Global 500, leveraging our profound business and industry expertise, digital operations services, and proficiency in data, technology, and AI.
We are currently seeking applications for the position of Lead Consultant, Python Developer. As a Lead Consultant, your primary responsibility will be to deliver Enhancement & Development services within a back-end environment. You will play a crucial role in designing, testing, and maintaining UI applications while collaborating closely with cross-functional teams to provide robust software solutions.
**Responsibilities:**
- Design and execute large-scale backend infrastructure and APIs.
- Develop high-quality code that is resilient, easily readable, and scalable.
- Demonstrate a strong commitment to delving deep into challenges, thriving, and making progress even in ambiguous situations.
- Foster and facilitate knowledge sharing within the team and external groups.
- Operate within an agile environment that prioritizes the most critical deliverables for our clients.
- Hands-on experience in Python, NoSQL databases such as MongoDB or Elasticsearch, caching technologies like Redis or Memcached, and streaming technologies like Kafka or RabbitMQ.
- Hold a Bachelor's Degree in Computer Science or a related field with more than 3 years of work experience, or a Master's Degree with over 3 years of experience in Software Development.
- Possess solid Computer Science fundamentals in Data Structures, Algorithms, Complexity Analysis, Object-Oriented Design, and the design of Large-Scale Data-Intensive Applications.
- Excellent analytical and communication skills, including the ability to communicate effectively with both technical and business audiences and collaborate on a global scale.
**Qualifications we seek in you!**
**Minimum Qualifications/Skills:**
- BE/B Tech/MCA/MBA degree.
- Exceptional written and verbal communication abilities.
- Strong problem-solving skills.
**Preferred Qualifications/Skills:**
- Proficiency in Java, Django, Tornado, or Flask frameworks.
- Experience with the ELK Stack (Elasticsearch, Logstash, Kibana) or Prometheus + Grafana.
- Knowledge of Linux, Bash, JSON, and SQL.
- Familiarity with credit products such as corporate bonds/loans, credit default swaps, and total return swaps is a plus.
- Experience working with cloud computing systems.
- Proficiency in networking, including TCP, HTTP, DNS, and SSL certificates.
- Understanding of Slang/SecDB.
**Job Details:**
- Job Title: Consultant
- Primary Location: India-Bangalore
- Schedule: Full-time
- Education Level: Bachelor's/Graduation/Equivalent
- Job Posting: Sep 2, 2024, 12:40:31 AM
- Unposting Date: Feb 28, 2025, 7:10:31 PM
- Master Skills List: Consulting
- Job Category: Full Time
Posted 3 weeks ago
0 years
0 Lacs
New Delhi, Delhi, India
On-site
Job description
About the Company
myUpchar is India's largest health-tech company with the vision to empower every individual with accessible and affordable healthcare through innovative technology, comprehensive medical information, teleconsultations and high-quality medicine, ensuring a healthier and more informed society. myUpchar was founded by alumni of Stanford University with rich experience at Amazon and BCG, among other leading global firms. myUpchar has been funded by top VCs and angel investors in India including Omidyar Network, Nexus VP, and Rajan Anandan. Currently, the platform receives around 50 million visits monthly and provides a positive health outcome to over 100K patients every month through our science-backed, result-oriented treatment approach.
We are looking for an experienced and motivated Software Developer with a strong focus on Ruby on Rails (RoR) to join our dynamic development team. The ideal candidate will have a solid understanding of building web applications using the Ruby on Rails framework and will be responsible for designing, developing, and maintaining scalable and high-performance web applications. The role requires expertise in front-end and back-end technologies, with a focus on Ruby on Rails, MySQL, Redis, Elasticsearch, and other modern technologies.
Position: Software Engineer - Ruby on Rails (RoR)
Location: Okhla Phase 3, South Delhi, Delhi 110020
Experience: 1-3 years
Employment Type: Full Time
Mandatory skills:
- Ruby on Rails
- Agile methodology
- jQuery, AJAX, MySQL 5.x, Apache
- Experience in Git, SVN and deployment
- Rails-specific server administration
- Good knowledge of the latest trends and developments in the Ruby and Rails community
Roles & Responsibilities:
- Independently execute the project (requirements gathering, analysis, technical design, development, testing, deployment)
- Coordinate with stakeholders for ongoing technical discussions, status updates, etc.
- Provide technical assistance and implementation for interfacing with 3rd-party APIs
- Evaluate and add open-source components based on project needs
- Mentor and assist other developers
- Should have developed at least one complex application using these tech stacks
Nice-to-have skills:
- NoSQL databases like MongoDB
- Experience with Elasticsearch
- Contributions to the community in the form of gems, plugins or technical articles/publications
Desired Candidate Profile:
- Experience: 1-3 years
- Good communication and interpersonal skills
Qualifications:
- UG: B.Tech
- PG: M.Tech
Posted 3 weeks ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
As a Technical Product Manager (TPM) for our internal Observability & Insights Platform, you will be responsible for defining the product strategy, owning discovery and delivery, and ensuring our engineers and stakeholders across 350+ services can build, debug, and operate confidently. You will own and evolve a platform that includes logging (ELK stack), metrics (Prometheus, Grafana, Thanos), tracing (Jaeger), structured audit logs, and SIEM integrations, while competing with high-cost solutions like Datadog and Honeycomb. Your impact will be both technical and strategic, improving developer experience, reducing operational noise, and driving platform efficiency and cost visibility.
🎯 Key Deliverables (Quarterly Outcomes):
• Successfully manage and deliver initiatives from the Observability Roadmap / Job Jar, tracked via RAG status and Jira epics.
• Complete structured discoveries for upcoming capabilities (e.g., SIEM exporter, SDK adoption, trace sampling).
• Design and roll out scorecards (in Port) to measure observability maturity across teams.
• Ensure feature parity and stakeholder migration in cost-saving initiatives (e.g., Datadog → Prometheus).
• Track and report platform usage, reliability, and cost metrics aligned to business outcomes.
• Drive feature documentation, adoption plans, and enablement sessions across engineering.
🔧 Jobs to Be Done:
• Define and evolve the observability product roadmap (Logs, Metrics, Traces, SDK, Dashboards, SIEM).
• Lead dual-track agile product discovery for upcoming initiatives: gather context, define the problem, validate feasibility.
• Partner with engineering managers to break down initiatives into quarterly deliverables, epics, and sprint-level execution.
• Maintain the Observability Job Jar and present RAG status every 2 weeks with confidence backed by Jira hygiene.
• Define and track metrics to measure the success of every platform capability (SLOs, cost savings, adoption %, etc.).
• Work closely with FinOps, Security, and Platform teams to ensure observability aligns with cost, compliance, and operational goals.
• Champion the adoption of SDKs, scorecards, and dashboards via enablement, documentation, and evangelism.
🤝 Ways of Working:
• Work in dual-track agile: discover next quarter's priorities while delivering this quarter's committed outcomes.
• Maintain a GPS PRD (Product Requirements Doc) for each major initiative: What problem are we solving? Why now? How do we measure value?
• Collaborate deeply with engineers in backlog grooming, planning, demos, and retrospectives.
• Follow RAG-based reporting with stakeholders: escalate risks early, present mitigation paths clearly.
• Operate with full visibility in Jira (Initiative → Epics → Stories → Subtasks), driving delivery rhythm across sprints.
• Use quarterly Job Jar reviews to recalibrate product priorities, staffing needs, and stakeholder alignment.
✅ You Should Have:
• 10+ years of product management experience, ideally in platform/infrastructure products.
• Proven success managing internal developer platforms or observability tooling.
• Experience launching or migrating enterprise-scale telemetry stacks (e.g., Datadog → Prometheus/Grafana, Honeycomb → Jaeger).
• Ability to break down complex engineering requirements into structured product plans with measurable outcomes.
• Strong technical grounding in cloud-native environments (EKS, Kafka, Elasticsearch, etc.).
• Excellent documentation and storytelling skills, especially to influence engineers and non-technical stakeholders.
📈 Success Metrics:
• % of services adopting the OTel SDK with structured logging
• % reduction in Datadog/Honeycomb usage & cost post-migration
• Uptime & latency of observability pipelines (Jaeger, ELK, Prometheus)
• Scorecard improvement across teams (Bronze → Silver → Gold)
• Number of issues detected/resolved using the new observability stack
• Time to incident triage with new tracing/logging capabilities
Posted 3 weeks ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Location: Ahmedabad, Gujarat
Job Mode: Hybrid
Experience: Minimum 4 to 7 years as a Full Stack Developer
Frontend Skills:
Must Have:
- Proficiency in React (primary) and JavaScript (secondary), with exposure to TypeScript
- Familiarity with Material-UI (MUI core, DataGrid, DatePicker, etc.)
- Knowledge of state management libraries (e.g., Redux, React-Redux)
- Form state management systems (e.g., Form-Redux)
- Understanding of Git and version control systems
- ESLint/Prettier
- Understanding of microfrontends
- Experience with routing and i18n libraries
- REST architecture
- Promises/async-await
- Module bundlers (Webpack)
Nice-to-Have:
- Knowledge of CSS-in-JS solutions
- Experience with responsive design
- Familiarity with frontend testing frameworks (e.g., Vitest, React Testing Library)
Backend Skills:
Must Have:
- Proficiency in Java
- Experience with build automation and plugin management tools (e.g., Maven, Gradle)
- Understanding of Git and version control systems
- Experience with Spring Boot or Quarkus frameworks
- Experience with persistence frameworks (e.g., Hibernate)
- Knowledge of PostgreSQL and database management (e.g., Liquibase)
- Understanding of RESTful API design and development
- Experience with microservices architecture
Nice-to-Have:
- Familiarity with Docker and containerization
- Knowledge of cloud platforms (e.g., Azure, Kubernetes)
- Understanding of CI/CD pipelines
- Experience with backend testing frameworks (e.g., JUnit, Mockito)
- Understanding of testing concepts (e.g., system tests, integration tests, load tests)
- Experience with messaging systems (e.g., RabbitMQ, Kafka, Azure Service Bus)
Other Skills: Monitoring
- Experience with monitoring tools and technologies (e.g., Prometheus, Grafana, Kibana, Elasticsearch, Jaeger)
Posted 3 weeks ago
2.0 - 9.0 years
0 Lacs
Karnataka
On-site
Mandatory Qualifications
- 5 to 9 years of experience in developing .NET Core web applications and tools.
- Hands-on experience or strong programming knowledge in the .NET frameworks, .NET Core, the latest MVC, and Web API frameworks.
- Strong programming knowledge in C# and SQL.
- Experience developing in a test-driven development environment (a plus).
- Exposure and experience in Entity Framework Core, API Gateway, LINQ to SQL/LINQ to Entities, and XML.
- Capability with JIRA and Git.
- Write clear code and prepare coding documentation.
- Ability to work in a CI/CD DevOps model.
- Proficiency working within an Agile team and Agile process.
- B.E/B.Tech/M.E/M.Tech/MCA.
- Minimum 2 years of experience with ELK.
Must-Have Skills: .NET Core, Elasticsearch
Optional Qualifications
- Azure knowledge
- Docker and Kubernetes
(ref: hirist.tech)
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
Maharashtra
On-site
You will be an essential part of a dynamic team that develops web services and APIs using Agile methodology. Your responsibilities will include writing reusable, testable, and efficient code, as well as ensuring code quality, organization, and automation. By using your skills and creativity, you will proactively identify and address defects to prevent issues from arising. Collaboration with team members to define, design, and implement new features will be a key aspect of your role. It will be your responsibility to maintain high performance, quality, and responsiveness in web services. Additionally, you will continuously explore, assess, and integrate new technologies to enhance development efficiency. To be successful in this role, you must have at least 2 years of prior experience as a Node.js developer. Proficiency in Node.js using express/koa or Restify, along with solid knowledge of JavaScript, is required. You should also have practical experience working with MongoDB and Redis, and be adept at creating secure RESTful web services and APIs. A strong understanding of Data Structures & Algorithms, as well as experience with System Design & Architecture, will be beneficial. Familiarity with integrating logging and monitoring systems, using Git for source version control, and possessing excellent problem-solving and debugging skills are essential. Furthermore, a good grasp of Microservice Architecture is crucial for this role. Desirable qualifications include experience in setting up CI/CD pipelines using Jenkins, working with cloud technologies such as AWS, and familiarity with ElasticSearch/Solr or AWS OpenSearch. Experience in Consumer Web Development for High-Traffic, Public Facing web applications, and the ability to provide technical insight for new initiatives across different disciplines will be advantageous. 
Additionally, experience with relational databases, SQL knowledge, and a total of 3 to 5 years of relevant experience are preferred for this position.
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: Python Developer
Experience: 4-7 years
Location: Bangalore
Minimum Qualifications
- Proficiency in backend software development using Python, Golang, or similar object-oriented languages.
- Strong understanding of RESTful APIs, service-oriented architecture (SOA), and microservice architectures.
- Hands-on experience with MySQL databases (schema design and query optimization).
- Experience deploying and managing services on AWS/GCP platforms.
- Familiarity with containerization tools like Docker and CI/CD pipelines.
- Skilled in performance optimization and debugging within distributed systems.
- Proven ability to lead technical projects and mentor junior team members effectively.
Preferred Qualifications
- Experience implementing design patterns and writing modular, maintainable code.
- Exposure to AI/ML pipelines and Elasticsearch integration.
- Hands-on experience with Postgres, MongoDB, DynamoDB, or Elasticsearch within distributed ecosystems.
- Knowledge of web servers and distributed system architectures.
- Experience coordinating frontend-backend workflows during feature development.
- Familiarity with data pipeline technologies or AI-powered search workflows.
- Proficiency with Git/GitHub workflows, code review processes, and CI/CD pipelines.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should be strong in data structures and algorithms, and have experience working on a large-scale consumer product. It is essential that you have worked on distributed and microservice architectures, and possess a solid understanding of scale, performance, and memory optimization fundamentals.
Requirements And Skills
- You should hold a BS/MS/BTech/MTech degree in Computer Science, Engineering, or a related field.
- You must have a minimum of 4-8 years of experience in Java/J2EE technologies.
- Experience in designing open APIs and implementing OAuth2 is required.
- Proficiency in Kafka, JMS, RabbitMQ, and AWS Elastic Queue is a must.
- You should have hands-on experience with Spring, Hibernate, Tomcat, Jetty, and Undertow in a production environment.
- Familiarity with JUnit and Mockito for unit test cases, and with MySQL or any other RDBMS, is necessary.
- Proven experience in software development and Java development is essential.
- Hands-on experience in designing and developing applications using Java EE platforms is required.
- Knowledge of object-oriented analysis and design using common design patterns is expected.
Preferred
- Experience in handling high-traffic applications is a plus.
- Familiarity with MongoDB, Redis, CouchDB, DynamoDB, and Riak is preferred.
- Experience in asynchronous programming (actor-model concurrency, RxJava, the Executor framework) is a bonus.
- Knowledge of Lucene, Elasticsearch, Solr, Jenkins, and Docker is advantageous.
- Experience in other languages/technologies like Scala, Node.js, or PHP is a plus.
- Experience with AWS, Google, or Azure cloud for managing, monitoring, and hosting servers is a bonus.
- Experience in handling Big Data and knowledge of WebSockets and backend servers for WebSockets is preferred.
Posted 3 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Company
We are looking for an experienced Python Developer with expertise in TensorFlow/PyTorch, LangChain, the OpenAI API, and Elasticsearch, and a deep understanding of Natural Language Processing, to help us develop and optimize high-performance applications.
About the Role
You will be responsible for implementing, testing, and maintaining data pipelines, machine learning models, and NLP techniques to extract valuable insights from data.
Responsibilities
- Design, develop, and maintain Python-based data analysis and machine learning applications with clean and well-documented code
- Develop, optimize and deploy ML models for information retrieval, LLM-based agents, embeddings (FAISS, Pinecone, Weaviate), predictive analytics, and Retrieval-Augmented Generation (RAG)
- Research and implement NLP algorithms for text classification, sentiment analysis, named entity recognition (NER), and topic modeling, including troubleshooting and debugging to ensure reliable performance at scale
- Implement data pipelines and ETL processes for big data processing
- Collaborate with cross-functional teams to understand business requirements and build scalable technology
Qualifications
- Strong proficiency in Python with hands-on experience in libraries like Pandas, NumPy, scikit-learn, TensorFlow, and PyTorch
- Expertise in information retrieval, statistical analysis, data visualization, and developing LLM-based agents, embeddings (FAISS, Pinecone, Weaviate), predictive analytics, and Retrieval-Augmented Generation (RAG)
- Hands-on experience with Natural Language Processing (NLP) libraries such as NLTK, spaCy, Hugging Face, or similar tools
- Experience with data wrangling techniques, including cleaning, transforming, and merging data sets from various sources
- Familiarity with machine learning algorithms and frameworks (supervised, unsupervised, and deep learning techniques)
- Solid understanding of text analytics, including text pre-processing, tokenization, stemming, lemmatization, and part-of-speech tagging
- Experience with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) is a plus
- Knowledge of data visualization tools (Matplotlib, Seaborn, ggplot2, Plotly, etc.)
- Strong problem-solving skills and attention to detail, with the ability to work in an agile, fast-paced environment and deliver results under tight deadlines
Required Skills
- 4-year Bachelor's degree in Computer Science, Information Technology, Data Science, Statistics or related domains, or equivalent qualification
- 4+ years developing scalable ML and NLP models and systems from 0 to 1 and deploying them to production
- Strong knowledge of RESTful APIs and GraphQL for frontend-backend communication
- Familiarity with version control using Git, CI/CD tools, and deployment pipelines
- Knowledge of big data tools and platforms (Spark, Hadoop, etc.) and experience managing databases
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
As a Lead Backend Engineer at our dynamic tech company, you will play a pivotal role in shaping the backend architecture and leading a talented team of 5-6 engineers. Your expertise in a diverse tech stack and your leadership skills will be essential in driving our backend projects to success.

You will guide and mentor a team of 5-6 backend engineers, ensuring the team delivers high-quality code, adheres to best practices, and meets project deadlines. Your hands-on development using TypeScript, Node.js, and NestJS will be instrumental in building robust and scalable backend systems. Proficiency in managing databases like PostgreSQL and MongoDB will help you implement efficient data storage and retrieval strategies. Additionally, your expertise in Elasticsearch, Neptune, and Gremlin will enable you to handle complex data structures and relationships effectively.

In this role, you will conduct code reviews, enforce coding standards, and maintain high-quality software. Collaboration with frontend teams, designers, and product managers will be crucial to ensure seamless integration and alignment with business goals. Furthermore, you will be responsible for planning, tracking, and reporting on project progress while effectively managing resources to meet deadlines.

To excel in this position, you should possess a Bachelor's degree in Computer Science or a related field, along with a minimum of 5 years of experience in backend development. Strong proficiency in TypeScript, Node.js, NestJS, PostgreSQL, MongoDB, Elasticsearch, Neptune, and Gremlin is required. Your proven experience in leading a team of engineers, excellent problem-solving skills, attention to detail, and strong communication and collaboration skills will be essential. Experience in a fast-paced, agile environment and prior experience in a similar lead role are preferred. Contributions to open-source projects or a strong GitHub portfolio will be considered a plus.
Posted 3 weeks ago
0.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Chennai, Tamil Nadu, India
Job ID 767622
Join our Team
About this opportunity
We are looking for a developer to build, maintain and evolve our test automation frameworks and test tools for a Telecom BSS application. This role is positioned in Chennai and is part of a distributed team that works in close collaboration with colleagues from Chennai, India and Karlskrona, Sweden. The team adopts Agile ways of working in which continuous improvement and innovation are practiced in daily work. The BSS domain is undergoing a technology transformation to a cloud-native platform and is adopting DevOps principles for its operations, giving this team the opportunity to be part of this evolution as new technologies and tools are explored and mastered. If you are loaded with curiosity and love exploring and learning new technologies, then this might be a great opportunity for you!
What you will do
- Be part of the BSS Charging cross-functional team, developing, automating, and executing non-functional test cases covering deployment, stability, robustness, dimensioning, and secure continuous verification of the application.
- Ensure that test configurations continuously evolve, and that newly developed hardware is added to the node pool when needed.
- Operate and monitor pipelines for continuous delivery of microservices and SW packages.
- Perform effective troubleshooting to isolate, verify, and validate faults, and support customer issues.
- Ensure that test effectiveness is continuously evaluated to confirm that the automated scope is effective.
- Actively share knowledge as an integrated part of the daily work, to develop competence.
- Actively re-use knowledge as a coordinated part of the daily work.
- Identify improvements in products and processes, and initiate and drive improvements.
- Continuously develop competence in the automation domain.
You will bring
- Solid programming skills in Java and Python/shell scripting.
- Strong experience with Linux systems.
- Experience with VMs and containers using tools such as Docker, Kubernetes, Helm and VMware.
- Experience with tools like Git, Gerrit, Maven, Spinnaker, Jenkins, etc.
- Experience in configuring and generating call traffic flows.
- Good knowledge of troubleshooting and test automation techniques/tools, together with creative problem-solving skills.
- Experience with dashboarding and reporting tools such as Grafana, Kibana and Jasper Reports.
- Preferably, knowledge of search & analytics engines such as Elasticsearch and visualization aids such as Kibana.
- Comfort with frequent, incremental code testing and deployment.
- Open communication reaching beyond functional borders to seek knowledge.
Key Qualifications:
- Experience: 5-10 years of overall experience in delivering software backlogs, with a minimum of 6 years of experience in telecom carrier-grade products.
- Education: B.Tech in CS/EC/EE or related studies. M.Tech is a plus.
Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
What happens once you apply?
Posted 3 weeks ago