10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description:
• Overall 10+ years of experience working within a large enterprise consisting of large and diverse teams.
• Minimum of 6 years of experience with APM and monitoring technologies.
• Minimum of 3 years of experience with ELK.
• Design and implement efficient log shipping and data ingestion processes.
• Collaborate with development and operations teams to enhance logging capabilities.
• Implement and configure components of the Elastic Stack, including Filebeat, Metricbeat, Winlogbeat, Logstash, and Kibana.
• Create and maintain comprehensive documentation for Elastic Stack configurations and processes.
• Ensure seamless integration between the various Elastic Stack components.
• Build and deploy advanced Kibana dashboards and visualizations.
• Create and manage on-premises Elasticsearch clusters, including configuration parameters, indexing, search and query performance tuning, RBAC security governance, and administration.
• Hands-on scripting and programming in Python, Ansible, Bash, data parsing (regex), etc.
• Experience with security hardening and vulnerability/compliance, OS patching, SSL/SSO/LDAP.
• Understanding of HA design, cross-site replication, and local and global load balancers.
• Data ingestion and enrichment from various sources, webhooks, and REST APIs with JSON/YAML/XML payloads, and testing with Postman, etc.
• CI/CD deployment pipeline experience (Ansible, Git).
• Strong knowledge of performance monitoring, metrics, planning, and management.
• Ability to apply a systematic and creative approach to solve problems, with out-of-the-box thinking and a sense of ownership and focus.
• Experience with application onboarding: capturing requirements, understanding data sources, architecture diagrams, application relationships, etc.
• Influencing other teams and engineering groups to adopt logging best practices.
• Effective communication skills, with the ability to articulate technical details to different audiences.
• Familiarity with ServiceNow, Confluence, and JIRA.
• Understanding of SRE and DevOps principles.

Technical Skills:
APM Tools – ELK, AppDynamics, PagerDuty
Programming Languages – Java/.NET, Python
Operating Systems – Linux and Windows
Automation – GitLab, Ansible
Container Orchestration – Kubernetes
Cloud – Microsoft Azure and AWS

Interested candidates, please share your resume with balaji.kumar@flyerssoft.com
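The log-parsing and REST-ingestion skills this posting asks for can be sketched in a few lines: parse log lines with a regex and build the NDJSON body that Elasticsearch's `_bulk` endpoint expects (an action line followed by a source line per document). The log format and index name here are invented for illustration; actually POSTing the body with a client is left out.

```python
import json
import re

# Hypothetical log format for illustration: "<timestamp> <LEVEL> <service> <message>"
LOG_PATTERN = re.compile(
    r"^(?P<timestamp>\S+)\s+(?P<level>[A-Z]+)\s+(?P<service>\S+)\s+(?P<message>.*)$"
)

def parse_line(line):
    """Parse one log line into a dict suitable for indexing, or None."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

def to_bulk_ndjson(lines, index):
    """Build an NDJSON body for the Elasticsearch _bulk API."""
    out = []
    for line in lines:
        doc = parse_line(line)
        if doc is None:
            continue  # skip unparseable lines rather than failing the batch
        out.append(json.dumps({"index": {"_index": index}}))
        out.append(json.dumps(doc))
    return "\n".join(out) + "\n"
```

In practice a shipper like Filebeat or Logstash handles this, but the same structure (grok-style field extraction, then bulk indexing) underlies those tools.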
Posted 1 week ago
6.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We are looking for a highly capable Senior Full Stack Engineer to be a core contributor in developing our suite of product offerings. If you love working on complex problems and writing clean code, you will love this role. Our goal is to solve a messy problem elegantly and cost-effectively. Our job is to collect, categorize, and analyze semi-structured data from different sources (20 million+ products from 500+ websites into our catalog of 500 million+ products). We help our customers discover new patterns in their data that can be leveraged so that they can become more competitive and increase their revenue.

Essential Functions:
Think like our customers – you will work with product and engineering leaders to define intuitive solutions
Design customer-facing UI and back-end services for various business processes
Develop high-performance applications by writing testable, reusable, and efficient code
Implement effective security protocols, data protection measures, and storage solutions
Improve the quality of our solutions – you will hold yourself and your team members accountable to writing high-quality, well-designed, maintainable software
Own your work – you will take responsibility to shepherd your projects from idea through delivery into production
Bring new ideas to the table – some of our best innovations originate within the team
Guide and mentor others on the team

Technologies We Use:
Languages: NodeJS/NestJS/TypeScript, SQL, React/Redux, GraphQL
Infrastructure: AWS, Docker, Kubernetes, Terraform, GitHub Actions, ArgoCD
Databases: Postgres, MongoDB, Redis, Elasticsearch, Trino, Iceberg
Streaming and Queuing: Kafka, NATS, KEDA

Qualifications:
6+ years of professional software engineering/development experience.
Proficiency with architecting and delivering solutions within a distributed software platform
Full-stack engineering experience, including front-end frameworks (React/TypeScript, Redux) and back-end technologies such as NodeJS/NestJS/TypeScript and GraphQL
Proven ability to learn quickly, make pragmatic decisions, and adapt to changing business needs
Proven ability to work effectively, prioritizing and organizing your work in a highly dynamic environment
Proven track record of working in highly distributed, event-driven systems
Strong proficiency with RDBMS/NoSQL/Big Data solutions (Postgres, MongoDB, Trino, etc.)
Solid understanding of data pipelines and workflow automation: orchestration tools, scheduling, and monitoring
Solid understanding of ETL/ELT and OLTP/OLAP concepts
Solid understanding of data lakes, data warehouses, and modeling practices (Data Vault, etc.), and experience leveraging data lake solutions (e.g., AWS Glue, dbt, Trino, Iceberg)
Ability to clean, transform, and aggregate data using SQL or scripting languages
Ability to design and estimate tasks and coordinate work with other team members during iteration planning
Solid understanding of AWS, Linux, and infrastructure concepts
Track record of lifting and challenging teammates to higher levels of achievement
Experience measuring, driving, and improving the software engineering process
Good testing habits and a strong eye for quality
Outstanding organizational, communication, and relationship-building skills conducive to driving consensus; able to work well in a cross-functional environment
Experience working in an agile team environment
Ownership – feel a sense of personal accountability/responsibility to drive execution from start to finish
Drive adoption of Wiser's Product Delivery organization principles across the department
Bonus Points:
Experience with CQRS
Experience with Domain-Driven Design
Experience with C4 modeling
Experience working within a retail or ecommerce environment
Experience with AI coding agents (Windsurf, Cursor, Claude, ChatGPT, etc.) and prompt engineering

Why Join Wiser Solutions?
Work on an industry-leading product trusted by top retailers and brands.
Be at the forefront of pricing intelligence and data-driven decision-making.
A collaborative, fast-paced environment where your impact is tangible.
Competitive compensation, benefits, and career growth opportunities.

Additional Information
EEO STATEMENT
Wiser Solutions, Inc. is an Equal Opportunity Employer and prohibits discrimination, harassment, and retaliation of any kind. Wiser Solutions, Inc. is committed to the principle of equal employment opportunity for all employees and applicants, providing a work environment free of discrimination, harassment, and retaliation. All employment decisions at Wiser Solutions, Inc. are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion, sex, national origin, family or parental status, disability, genetics, age, sexual orientation, veteran status, or any other status protected by state, federal, or local law. Wiser Solutions, Inc. will not tolerate discrimination, harassment, or retaliation based on any of these characteristics.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company: The healthcare industry is the next great frontier of opportunity for software development, and Health Catalyst is one of the most dynamic and influential companies in this space. We are working on solving national-level healthcare problems, and this is your chance to improve the lives of millions of people, including your family and friends. Health Catalyst is a fast-growing company that values smart, hardworking, and humble individuals. Each product team is a small, mission-critical team focused on developing innovative tools to support Catalyst’s mission to improve healthcare performance, cost, and quality.

POSITION OVERVIEW: We are looking for a highly skilled Senior Database Engineer & Storage Expert with 5+ years of hands-on experience in managing and optimizing large-scale, high-throughput database systems. The ideal candidate will possess deep expertise in handling complex ingestion pipelines across multiple data stores and a strong understanding of distributed database architecture. The candidate will play a critical technical leadership role in ensuring our data systems are robust, performant, and scalable to support massive datasets ingested from various sources without bottlenecks. You will work closely with data engineers, platform engineers, and infrastructure teams to continuously improve database performance and reliability.

KEY RESPONSIBILITIES:
• Query Optimization: Design, write, debug, and optimize complex queries for RDS (MySQL/PostgreSQL), MongoDB, Elasticsearch, and Cassandra.
• Large-Scale Ingestion: Configure databases to handle high-throughput data ingestion efficiently.
• Database Tuning: Optimize database configurations (e.g., memory allocation, connection pooling, indexing) to support large-scale operations.
• Schema and Index Design: Develop schemas and indexes to ensure efficient storage and retrieval of large datasets.
• Monitoring and Troubleshooting: Analyze and resolve issues such as slow ingestion rates, replication delays, and performance bottlenecks.
• Performance Debugging: Analyze and troubleshoot database slowdowns by investigating query execution plans, logs, and metrics.
• Log Analysis: Use database logs to diagnose and resolve issues related to query performance, replication, and ingestion bottlenecks.
• Data Partitioning and Sharding: Implement partitioning, sharding, and other distributed database techniques to improve scalability.
• Batch and Real-Time Processing: Optimize ingestion pipelines for both batch and real-time workloads.
• Collaboration: Partner with data engineers and Kafka experts to design and maintain robust ingestion pipelines.
• Stay Updated: Stay up to date with the latest advancements in database technologies and recommend improvements.

REQUIRED SKILLS AND QUALIFICATIONS:
• Database Expertise: Proven experience with MySQL/PostgreSQL (RDS), MongoDB, Elasticsearch, and Cassandra.
• High-Volume Operations: Proven experience in configuring and managing databases for large-scale data ingestion.
• Performance Tuning: Hands-on experience with query optimization, indexing strategies, and execution plan analysis for large datasets.
• Database Internals: Strong understanding of replication, partitioning, sharding, and caching mechanisms.
• Data Modeling: Ability to design schemas and data models tailored to high-throughput use cases.
• Programming Skills: Proficiency in at least one programming language (e.g., Python, Java, Go) for building data pipelines.
• Debugging Proficiency: Strong ability to debug slowdowns by analyzing database logs, query execution plans, and system metrics.
• Log Analysis Tools: Familiarity with database log formats and tools for parsing and analyzing logs.
• Monitoring Tools: Experience with monitoring tools such as AWS CloudWatch, Prometheus, and Grafana to track ingestion performance.
• Problem-Solving: Analytical skills to diagnose and resolve ingestion-related issues effectively.

PREFERRED QUALIFICATIONS:
• Certification in any of the mentioned database technologies.
• Hands-on experience with cloud platforms such as AWS (preferred), Azure, or GCP.
• Knowledge of distributed systems and large-scale data processing.
• Familiarity with cloud-based database solutions and infrastructure.
• Familiarity with large-scale data ingestion tools like Kafka, Spark, or Flink.

EDUCATIONAL REQUIREMENTS:
• Bachelor’s degree in Computer Science, Information Technology, or a related field. Equivalent work experience will also be considered.

Equal Employment Opportunity has been, and will continue to be, a fundamental principle at Health Catalyst, where employment is based upon personal capabilities and qualifications without discrimination or harassment on the basis of race, color, national origin, religion, sex, sexual orientation, gender identity, age, disability, citizenship status, marital status, creed, genetic predisposition or carrier status, or any other characteristic protected by law. Health Catalyst is committed to a work environment where all individuals are treated with respect and dignity.
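The high-throughput ingestion work described above usually starts with batching: replacing per-row INSERT statements with chunked bulk writes, one round trip per batch. A minimal Python sketch; the `cursor` object, the `events` table, and its columns are hypothetical stand-ins for a real DB-API driver.

```python
from itertools import islice

def batched(rows, batch_size):
    """Yield lists of up to batch_size rows from any iterable."""
    it = iter(rows)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def bulk_insert(cursor, rows, batch_size=1000):
    """Sketch of a driver-agnostic bulk-insert loop: each executemany()
    call sends one batch instead of one statement per row."""
    for batch in batched(rows, batch_size):
        cursor.executemany(
            "INSERT INTO events (id, payload) VALUES (%s, %s)", batch
        )
```

Whether 1,000 is the right batch size depends on row width, WAL/commit settings, and the target engine; measuring ingestion rate per batch size is part of the tuning work the role describes.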
Posted 1 week ago
6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Location: Kolkata
Experience Required: 6 to 8+ years
Employment Type: Full-time
CTC: 8 to 12 LPA

About Company: At Gintaa, we're redefining how Indians order food. With our focus on affordability, exclusive restaurant partnerships, and hyperlocal logistics, we aim to scale across India's Tier 1 and Tier 2 cities. We're backed by a mission-driven team and expanding rapidly – now’s the time to join the core tech leadership and build something impactful from the ground up.

Job Description: We are seeking a talented and experienced mid-senior-level Software Engineer (Backend) to join our dynamic team. The ideal candidate will have strong expertise in backend technologies, microservices architecture, and cloud environments. You will be responsible for designing, developing, and maintaining high-performance backend systems to support scalable applications.

Responsibilities:
Design, develop, and maintain robust, scalable, and secure backend services and APIs.
Work extensively with Java, Spring Boot, Spring MVC, and Hibernate to build and optimize backend applications.
Develop and manage microservices-based architectures.
Implement and optimize RDBMS (MySQL, PostgreSQL) and NoSQL (MongoDB, Cassandra, etc.) solutions.
Build and maintain RESTful services for seamless integration with frontend and third-party applications.
A basic understanding of Node.js and Python is a bonus, along with the ability to learn and work with new technologies.
Optimize system performance, security, and scalability.
Deploy and manage applications in cloud environments (AWS, GCP, or Azure).
Collaborate with cross-functional teams including frontend engineers, DevOps, and product teams.
Convert business requirements into technical development items using critical thinking and analysis.
Lead a team and manage activities, including task distribution.
Write clean, maintainable, and efficient code following best practices.
Participate in code reviews and technical discussions, and contribute to architectural decisions.
Required Skills:
6+ years of experience in backend development with Java and the Spring framework (Spring Boot, Spring MVC).
Strong knowledge of Hibernate (ORM) and database design principles.
Hands-on experience with microservices architecture and RESTful API development.
Proficiency in RDBMS (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra, etc.).
Experience with cloud platforms such as AWS, GCP, or Azure.
Experience with Kafka or an equivalent tool for messaging and stream processing.
Basic knowledge of Node.js for backend services and APIs.
Proven track record of working in fast-paced Agile/Scrum environments.
Proficient with Git.
Familiarity with IDEs such as IntelliJ and VS Code.
Strong problem-solving and debugging skills.
Understanding of system security, authentication, and authorization best practices.
Excellent communication and collaboration skills.

Preferred Skills (Nice to Have):
Experience with Elasticsearch for search and analytics.
Familiarity with Firebase tools for real-time database, Firestore, authentication, and notifications.
Hands-on experience with Google Cloud Platform (GCP) services.
Hands-on experience working with Node.js and Python.
Exposure to containerization and orchestration tools like Docker and Kubernetes.
Experience with CI/CD pipelines and basic DevOps practices.
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a skilled and security-focused DevOps Engineer / Infrastructure Engineer to design, implement, and maintain a production-grade, on-prem-ready infrastructure stack. This role demands expertise in open-source technologies, containerization, orchestration, and monitoring tools to support regulated-industry-grade deployments (e.g., finance, healthcare, pharma).

Key Responsibilities:
Design and implement a containerized stack using Docker and Podman (with a focus on rootless and daemonless security architecture).
Deploy and manage Kubernetes clusters (K8s; OpenShift optional) for scalable microservices orchestration.
Set up and maintain CI/CD pipelines using Jenkins or GitLab CI/CD.
Configure persistent and high-availability storage solutions using NFS, HDFS, or Ceph (with Rook integration).
Establish centralized logging and monitoring with Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), or Graylog.
Integrate LDAP / Active Directory / SSO for secure user authentication and role-based access control (RBAC).
Deploy and configure NGINX or Traefik as a reverse proxy for secure traffic routing and SSL termination.
Work closely with security and compliance teams to ensure infrastructure is audit-ready and adheres to industry regulations.

Required Skills & Qualifications:
5+ years of strong hands-on experience with Docker, Podman, and Kubernetes.
Familiarity with OpenShift (nice to have, especially in regulated industries).
Experience configuring and managing CI/CD tools like Jenkins and GitLab.
Solid understanding of distributed file systems like HDFS and Ceph, and networked storage like NFS.
Experience with Prometheus, Grafana, Kibana, ELK, or Graylog for infrastructure observability.
Understanding of enterprise security integration: LDAP, Active Directory, SSO.
Proficiency in setting up and managing NGINX and/or Traefik.
Strong scripting skills (Bash, Python, or Go) for automation and configuration.
Exposure to infrastructure in air-gapped or isolated environments is a plus.

Preferred Industry Experience:
Telecom, Banking, Financial Services, Retail
Government or Public Sector IT

What We Offer:
Competitive salary & performance bonus
Opportunity to work on mission-critical systems in a secure, self-hosted environment
Collaborative team and supportive leadership
Exposure to enterprise-grade open-source deployments
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚀 SDE II – Software Development Engineer
Location: Hyderabad (On-site)
Experience: 3–5 Years
Company: NxtWave

📢 We’re Hiring!
Are you passionate about building scalable fullstack applications that directly impact thousands of learners? At NxtWave, we’re on a mission to transform education and empower the next generation of tech talent — and we’re growing fast! We're looking for an experienced Software Development Engineer (SDE II) who thrives in a fast-paced, agile environment and is eager to build world-class products from the ground up.

Responsibilities
Lead the design and implementation of complex fullstack features (frontend, backend, and data layers)
Make key architectural decisions for frameworks, data stores, and performance optimization
Review code and enforce clean code practices and design patterns
Build and maintain reusable component libraries and backend service templates
Identify and eliminate performance bottlenecks
Own CI/CD pipelines for automated builds and deployments
Define and implement comprehensive testing strategies (unit, integration, E2E)
Ensure security (OWASP Top 10), accessibility (WCAG), and SEO best practices
Collaborate with Product, UX, and Ops to translate business goals into technical deliverables
Mentor junior engineers and actively contribute to hiring efforts

✅ Requirements
3–5 years of experience building fullstack applications with real-world impact
Strong leadership in Agile/Scrum settings and a hunger for continuous learning
Experience with Node.js (Express/NestJS), Python (Django/FastAPI), or Java (Spring Boot)
Hands-on with MySQL/PostgreSQL, Elasticsearch/DynamoDB, Redis, etc.
Familiarity with Docker and AWS (Lambda, EC2, S3, API Gateway, etc.)
Skilled in testing frameworks like Jest, pytest, Cypress, or Playwright
Performance tuning using tools like Lighthouse and backend tracing
Secure coding: OAuth2/JWT, XSS/CSRF protection
Strong communication and code review skills

Bonus Traits We Love
Solution-oriented with a drive to deliver high-quality software
Collaborative and friendly team player
Open to feedback and focused on growth
Passionate about innovation and learning new tech

📩 Ready to Build the Future of EdTech?
If you’re eager to be part of something impactful and thrive on taking ownership, we’d love to hear from you! Apply now or connect with me directly to explore this exciting opportunity.
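The secure-coding item above (OAuth2/JWT) comes down to signing and verifying tokens. Below is a minimal sketch of the HS256 mechanism behind JWTs using only the Python standard library; in production a vetted library such as PyJWT should be used instead, and the payload/secret here are purely illustrative.

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Build a JWT-style token: header.payload.signature (HS256)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    try:
        header, body, sig = token.split(".")
    except ValueError:
        return False  # malformed token
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak signature bytes through timing differences.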
Posted 1 week ago
15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Scope of the Role: The Principal Data Scientist will lead the architecture, development, and deployment of cutting-edge AI solutions, driving innovation in machine learning, deep learning, and generative AI. The role demands cutting-edge expertise in advanced Gen AI development, agentic AI development, optimization, and integration with enterprise-scale applications, while fostering an experimental and forward-thinking environment. This senior-level, hands-on role offers an immediate opportunity to lead next-gen AI innovation, drive strategic AI initiatives, and shape the future of AI adoption at scale in large enterprise and industry applications.

Reports To: Chief AI Officer
Reportees: Individual Contributor Role

Minimum Qualification: Bachelor’s or Master’s/PhD in Artificial Intelligence, Machine Learning, Data Science, Computer Science, or a related field. Advanced certifications in AI/ML frameworks, GenAI, agentic AI, cloud AI platforms, or AI ethics preferred.

Experience:
15+ years of experience in AI/ML, with a proven track record in deploying AI-driven solutions, and deep expertise and specialty in generative AI (use of proprietary and open-source LLMs/SLMs/VLMs/LCMs; experience in RAG, fine-tuning and developing derived domain-specific models, multi-agent frameworks and agentic architecture, and LLMOps).
Expertise in implementing data solutions using Python, pandas, NumPy, scikit-learn, PyTorch, TensorFlow, data visualizations, machine learning algorithms, deep learning architectures, LLMs, SLMs, VLMs, LCMs, generative AI, agents, prompt engineering, NLP, transformer architectures, GPTs, computer vision, and MLOps for both unstructured and structured data, as well as synthetic data generation.
Experience in building generic and customized conversational AI assistants.
Experience working in innovation-driven, research-intensive, or AI R&D-focused organizations.
Experience in building AI solutions for the Construction, Manufacturing, or Oil & Gas industries would be good to have.

Objective / Purpose: The Principal Data Scientist will drive breakthrough AI innovation for pan-L&T businesses by developing scalable, responsible, and production-ready Gen AI and agentic AI models using RAG with multiple LLMs, and derived domain-specific models where applicable. This role involves cutting-edge research, model deployment, AI infrastructure optimization, and AI strategy formulation to enhance business capabilities and user experiences.

Key Responsibilities:
AI Model Development & Deployment: Design, train, and deploy ML/DL models for predictive analytics, NLP, computer vision, generative AI, and AI agents.
Applied AI Research & Innovation: Explore emerging AI technologies such as LLMs, RAG (Retrieval-Augmented Generation), fine-tuning techniques, multi-modal AI, agentic architectures, reinforcement learning, and self-supervised learning, with an application orientation.
Model Optimization & Scalability: Optimize AI models for inference efficiency, explainability, and responsible AI compliance.
AI Product Integration: Work collaboratively with business teams, data engineers, MLOps teams, and software developers to integrate AI models into applications, APIs, and cloud platforms.
AI Governance & Ethics: Ensure compliance with AI fairness, bias mitigation, and regulatory frameworks (GDPR, CCPA, AI Act).
Cross-functional Collaboration: Partner with business teams, UX researchers, and domain experts to align AI solutions with real-world applications.
AI Infrastructure & Automation: Develop automated pipelines, agentic model monitoring, and CI/CD for AI solutions.
Technical Expertise:
Machine Learning & Deep Learning: TensorFlow, PyTorch, scikit-learn; regression, classification, clustering; ensembling techniques (bagging, boosting); recommender systems; probability distributions and data visualizations
Generative AI & LLMs: OpenAI GPT, Google Gemini, Llama, Hugging Face Transformers, RAG, CAG, KAG; knowledge of other LLMs, SLMs, VLMs, LCMs; LangChain
NLP & Speech AI: BERT, T5, Whisper
Computer Vision: YOLO, OpenCV, CLIP, Convolutional Neural Networks (CNNs)
MLOps & AI Infrastructure: MLflow, Kubeflow, Azure ML
Data Platforms: Databricks, Pinecone, FAISS, Elasticsearch, semantic search, Milvus, Weaviate
Cloud AI Services: Azure OpenAI & Cognitive Services
Explainability & Responsible AI: SHAP, LIME, FairML
Prompt Engineering

Publish internal/external research papers, contribute to AI patents, and present at industry conferences and workshops. Evaluate open-source AI/ML frameworks and commercial AI products to enhance the organization’s AI capabilities.

Behavioural Attributes:
Business Acumen – ability to align AI solutions with business goals.
Market Foresight – identifying AI trends and emerging technologies.
Change Management – driving AI adoption in dynamic environments.
Customer Centricity – designing AI solutions with user impact in mind.
Collaborative Leadership – working cross-functionally with diverse teams.
Ability to Drive Innovation & Continuous Improvement – research-driven AI development.

Key Value Drivers:
Advancing AI-driven business transformation – securely, optimally, and at scale.
Reducing time-to-value for AI-powered innovations.
Enabling AI governance, compliance, and ethical AI adoption.

Future Career Path: The future career path is establishing oneself as a world-class SME with deep domain experience and thought leadership in applying next-gen AI technologies in the construction, energy, and manufacturing domains.
The career would progress from technology specialization into leadership roles, in challenging positions such as Head – AI Strategy and Chief AI Officer, driving enterprise AI strategy, governance, and innovation for various ICs/BUs across pan-L&T businesses.
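The RAG pattern this role centers on reduces, at its core, to retrieving the documents most similar to a query embedding and feeding them into the prompt. A toy sketch of the retrieval step, with hand-made vectors standing in for a real embedding model and vector store (FAISS, Milvus, etc.):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, corpus, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query and
    return the top_k doc ids; these docs would then be concatenated
    into the LLM prompt as grounding context."""
    scored = sorted(
        corpus, key=lambda item: cosine(query_vec, item[1]), reverse=True
    )
    return [doc_id for doc_id, _ in scored[:top_k]]
```

Production systems replace the linear scan with an approximate-nearest-neighbor index and often re-rank the candidates, but the retrieve-then-generate structure is the same.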
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
At EG, we are dedicated to developing software solutions that enable our customers to focus on their profession while we handle the intricacies of technology. Our industry-specific software is crafted by professionals who understand the sector intimately, supported by the stability, innovation, and security provided by EG. We are on a mission to drive industries forward by addressing significant challenges such as resource optimization, efficiency enhancement, and sustainability promotion. With a thriving global workforce exceeding 3000 employees, including a 700+ strong team located in Mangaluru, India, we foster a people-first culture that encourages innovation, collaboration, and continuous learning. We invite individuals to join us in the journey of creating software that serves people rather than making people work for it. EG Healthcare, a division of EG, is dedicated to building intelligent solutions that enhance healthcare services across the Nordics and Europe. Our goal is to simplify complexities, empower care providers, and improve patient outcomes through technological innovation. Our core values revolve around collaboration, curiosity, and purpose-driven progress. As a Senior Software Developer at EG Healthcare, you will leverage your passion for software engineering, backed by over 8 years of experience, to develop impactful solutions using cutting-edge technologies. Your role will involve designing and implementing robust, scalable software solutions utilizing Java and associated technologies. You will be responsible for creating and maintaining RESTful APIs for seamless system integration, collaborating with diverse teams to deliver high-impact features, ensuring code quality through best practices and testing, and contributing to architectural decisions and technical enhancements.

Key Responsibilities:
- Design and develop robust, scalable software solutions using Java and related technologies.
- Build and maintain RESTful APIs for seamless integration across systems.
- Collaborate with cross-functional teams to deliver high-impact features.
- Ensure code quality through best practices, automated testing, and peer reviews.
- Utilize Docker, Kubernetes, and CI/CD pipelines for modern DevOps workflows.
- Troubleshoot issues efficiently and provide timely solutions.
- Contribute to architectural decisions and technical improvements.

Must-Have Skills:
- Proficiency in Java (8+ years of professional experience).
- Experience with Spring Boot for backend service development.
- Strong understanding of REST API design and implementation.
- Familiarity with Docker and Kubernetes.
- Exposure to Event Sourcing / CQRS patterns.
- Hands-on experience with front-end technology, preferably React.js.
- Proficient in relational databases, Git, and testing practices.

Good-to-Have Skills:
- Knowledge of Elasticsearch for advanced search and indexing.
- Experience with the Axon Framework for distributed systems in Java.
- Familiarity with tools like Grafana for observability.
- Exposure to ArgoCD for GitOps-based deployments.
- Full-stack mindset or experience collaborating with front-end teams.

Who You Are:
- Analytical and structured thinker.
- Reliable, self-driven, and team-oriented.
- Strong communication skills.
- Eager to contribute to a meaningful mission in healthcare.

What We Offer:
- Competitive salary and benefits.
- Opportunities for professional growth.
- Collaborative, innovative work culture.

Join us at EG Healthcare and become part of a team that is dedicated to building smarter healthcare solutions for the future.
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role: We are looking for a seasoned DevOps Engineer with 8–10 years of experience to lead our deployment processes and infrastructure strategies. This role is ideal for someone with deep knowledge of cloud platforms, automation tools, microservices architecture, and CI/CD pipelines. You will play a key role in ensuring scalability, security, and performance across our systems.

Key Responsibilities:
Take end-to-end ownership of CI/CD pipelines and infrastructure deployment.
Architect and manage scalable cloud solutions (preferably GCP) for microservices-based applications.
Collaborate with engineering, QA, and product teams to streamline release cycles.
Monitor, troubleshoot, and optimize system performance and uptime.
Build and maintain containerization using Docker and orchestration with Kubernetes.
Implement infrastructure as code using Terraform, Ansible, or equivalent tools.
Ensure effective deployment and scaling of microservices.
Drive automation in infrastructure, monitoring, and alerting systems.
Conduct root cause analysis and resolve critical production issues.

Required Skills & Qualifications:
8–10 years of experience in DevOps or similar engineering roles.
Strong command of microservices deployment and management in cloud environments.
Expertise in GCP, AWS, or Azure.
Proficiency in Git, GitHub workflows, and CI/CD tools (Jenkins, GitLab CI/CD).
Knowledge of containerization (Docker) and orchestration (Kubernetes).
Strong scripting skills in Shell, Python, or similar.
Familiarity with JavaScript frameworks (Node.js, React) and their deployments.
Experience with databases (SQL and NoSQL) and tools like Elasticsearch, Hive, Spark, or Presto.
Understanding of secure development practices and information security standards.

Preferred: Immediate joiners and local candidates from Noida.
Posted 1 week ago
0.0 - 3.0 years
15 - 20 Lacs
Gurugram, Haryana
On-site
We are seeking a highly experienced and motivated Tech Lead – Node.js to join our fast-growing engineering team in the E-commerce domain. This is a unique leadership opportunity to architect and build robust backend systems, mentor talented developers, and play a pivotal role in shaping the technical direction of our product. As a Tech Lead, you will be at the forefront of innovation, leading the design and implementation of microservices and high-performance systems that power a next-gen e-commerce platform.

Responsibilities:
- Lead backend architecture and development using Node.js and microservices-based design.
- Design, build, and scale backend services, APIs, and real-time data processing pipelines.
- Spearhead technical discussions and architecture reviews, and make strategic decisions on scalability and system design.
- Collaborate with cross-functional teams including product managers, DevOps, frontend developers, and QA.
- Optimize MySQL databases and implement efficient querying using Sequelize ORM.
- Implement advanced search functionality with Elasticsearch.
- Ensure robust CI/CD pipelines and work closely with DevOps for cloud infrastructure optimization (AWS preferred).
- Enforce best practices for code quality, performance, security, and testing.
- Mentor and guide a team of backend engineers, promoting a culture of ownership, innovation, and continuous improvement.

Required Skills and Qualifications:
- 6+ years of professional experience in backend development with a strong focus on Node.js.
- Proven experience in designing and scaling backend systems in the E-commerce domain.
- Solid understanding of Express.js, Sequelize ORM, and MySQL.
- Experience working with Elasticsearch for search indexing and querying.
- Proficient in microservices architecture, event-driven systems, and RESTful API development.
- Deep knowledge of AWS cloud services, CI/CD pipelines, and containerization tools like Docker.
- Strong debugging skills and experience troubleshooting performance and scalability issues.
- Excellent communication and leadership abilities – capable of mentoring junior developers and leading project initiatives.

Job Types: Full-time, Permanent
Pay: ₹1,500,000.00 - ₹2,000,000.00 per year
Benefits:
- Flexible schedule
- Health insurance
- Paid sick time
- Paid time off
- Provident Fund

Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Preferred)

Application Question(s):
- What is your current CTC?
- What is your expected CTC?
- What is your official notice period?

Experience:
- Node.js: 6 years (Preferred)
- Microservices: 5 years (Preferred)
- MySQL: 5 years (Preferred)
- E-commerce domain: 4 years (Preferred)
- AWS: 4 years (Preferred)
- Sequelize ORM: 4 years (Preferred)
- Team Handling: 3 years (Preferred)

Work Location: In person
Posted 1 week ago
14.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backdrop
AVIZVA is a Healthcare Technology Organization that harnesses technology to simplify, accelerate, & optimize the way healthcare enterprises deliver care. Established in 2011, we have served as strategic enablers for healthcare enterprises, helping them enhance their overall care delivery. With over 14 years of expertise, we have engineered more than 150 tailored products for leading Medical Health Plans, Dental and Vision Plan Providers, PBMs, Medicare Plan Providers, TPAs, and more.

Overview Of The Role
As a System Analyst within a product development team in AVIZVA, you will be one of the front-liners of the team, spearheading your product’s solution design activities alongside the product owners, system architect, and lead developers while collaborating with all business & technology stakeholders.

Job Responsibilities
- Gather & analyze business, functional, and data requirements with the PO and relevant stakeholders, and derive system requirements from the same.
- Work with the system architect to develop an understanding of the product's architecture, components, interactions, and flows, and build clarity around the technological nuances & constructs involved.
- Develop an understanding of the various datasets relevant to the industry, their business significance, and their logical structuring from a data modeling perspective.
- Conduct in-depth industry research around datasets pertinent to the underlying problem statements.
- Identify, (data) model & document the various entities, relationships & attributes along with appropriate cardinality and normalization.
- Apply ETL principles to formulate & document data dictionaries, business rules, and transformation & enrichment logic for the various datasets in question, pertaining to the various source & target systems in context.
- Define data flows, validations & business rules driving the interchange of data between components of a system or multiple systems.
- Define requirements around system integrations and exchange of data, such as systems involved, services (APIs) involved, nature of integration, and handshake details (data involved, authentication, etc.).
- Identify use-cases for exposure of data within an entity/dataset via APIs, define detailed API signatures, and create API documentation.
- Provide clarifications to the development team around requirements, system design, integrations, data flows, and scenarios.
- Support other product teams dependent on the APIs and integrations defined by your product team in understanding the endpoints, logic, business context, entity structure, etc.
- Provide backlog grooming support to the Product Owner through activities such as functional analysis and data analysis.

Skills & Qualifications
- Bachelor’s or Master’s degree in Computer Science or any other analytically inclined field of study.
- At least 4 years of relevant experience in roles such as Business Analyst, Systems Analyst, or Business Systems Analyst.
- Experience in analysing & defining systems involving varying levels of complexity in terms of underlying components, data, integrations, flows, etc.
- Experience working with data (structured and semi-structured), data modeling, and writing database queries with hands-on SQL, plus working knowledge of Elasticsearch indexes. Experience with unstructured data will be a huge plus.
- Experience in identifying & defining entities & APIs, and writing API specifications & API consumer specifications.
- Ability to map data from various sources to various consumer endpoints such as a system, a service, UI, process, sub-process, workflow, etc.
- Experience with data management products based on ETL principles, involving multitudes of datasets, disparate data sources, and target systems.
- A strong analytical mindset with a proven ability to understand a variety of business problems through stakeholder interactions and other methods, to ideate the most aligned and appropriate technology solutions.
- Exposure to diagrammatic analysis & elicitation of business processes, data & system flows using BPMN & UML diagrams, such as activity flows, use-cases, sequence diagrams, DFDs, etc.
- Exposure to writing requirements documentation such as BRDs, FRDs, SRS, use-cases, user-stories, etc.
- An appreciation for the systems’ technological and architectural concepts, with an ability to speak about the components of an IT system, inter-component interactions, databases, external and internal data sources, data flows & system flows.
- Experience (at least familiarity) of working with the Atlassian suite (Jira & Confluence).
- Experience in product implementations & customisations through system configurations will be an added plus.
- Experience of driving UX design activities in collaboration with graphic & UI design teams, by means of enabler tools such as wireframes, sketches, flow diagrams, information architecture, etc., will be an added plus.
- Exposure to UX design & collaboration tools such as Figma, Zeplin, etc. will be an added plus.
- Awareness of or prior exposure to Healthcare & Insurance business & data will be a huge advantage.
Posted 1 week ago
10.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Job Title: Engineering Manager - Search Engineering Team

Team Overview:
The Search Team at Flipkart is dedicated to refining and optimizing the platform's search infrastructure to facilitate seamless navigation and discovery for users across its extensive product catalog, building the next generation of search for Indian e-commerce. Tasked with enhancing search algorithms, improving relevance ranking, and implementing personalized search functionalities, the team operates at the intersection of technology and user experience. Collaborating closely with product managers, data scientists, and cross-functional teams, they translate business objectives into technical solutions that elevate the overall shopping journey. Moreover, the team continually strives to enhance the reliability, scalability, and performance of Flipkart's search systems, ensuring they can adeptly manage the platform's burgeoning search volumes. Through their commitment to innovation and teamwork, the Search Team plays an instrumental role in fostering customer satisfaction and loyalty within the Flipkart ecosystem.

Position Overview:
As an Engineering Manager for the Search Engineering Team at Flipkart, you will lead a team of skilled engineers in optimizing search functionalities and driving innovation in our search algorithms. You will be responsible for overseeing the end-to-end development process, from ideation to deployment, ensuring the delivery of high-quality, scalable, and efficient search solutions.

Key Responsibilities:
- Build and nurture a world-class team of engineers dedicated to pushing the boundaries of innovation and delivering industry-leading solutions in e-commerce search.
- Foster a culture of continuous experimentation and learning, encouraging the team to iterate rapidly and leverage data-driven insights to drive improvements in search relevance and user satisfaction.
- Define and execute the search engineering roadmap, aligning with business objectives and solving user needs.
- Collaborate closely with cross-functional teams, including product management, design, and data science, to prioritize features and enhancements.
- Provide technical expertise and guidance in search algorithms, query understanding, query rewriting, and indexing techniques.
- Oversee the design, development, testing, and deployment of search-related features, ensuring adherence to quality standards and best practices.
- Monitor search performance metrics and implement optimizations to enhance search relevance and speed.
- Act as a strategic advisor to senior leadership, providing insights and recommendations on search-related initiatives and opportunities.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in software engineering, with a focus on search technologies.
- 2+ years of experience in a technical leadership or management role, preferably managing engineering teams of 8-12 people.
- Strong understanding of search algorithms, indexing techniques, relevance, and ranking.
- Experience with search platforms and frameworks such as Elasticsearch, Solr, or Lucene.
- Proficiency in programming languages such as Java, Python, or C++.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively across teams and influence stakeholders.
- Proven track record of delivering complex projects on schedule and within budget.
- Passion for technology and a commitment to staying abreast of industry trends and advancements.
- Experience with applied ML in one or more of the following domains is a plus: search, recommender systems, NLP, or computer vision.
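In production, the relevance-ranking work described above is typically delegated to Elasticsearch, Solr, or Lucene, but the core idea can be sketched with a toy TF-IDF scorer over an in-memory catalog (the documents and query below are invented for illustration):

```python
import math
from collections import Counter

# Tiny in-memory "catalog" standing in for an inverted index
docs = {
    "d1": "red cotton shirt slim fit",
    "d2": "blue denim jeans slim fit",
    "d3": "red running shoes",
}

def score(query: str, doc_id: str) -> float:
    """Toy TF-IDF: term frequency in the doc times inverse document frequency."""
    n_docs = len(docs)
    tf = Counter(docs[doc_id].split())
    total = 0.0
    for term in query.split():
        # Document frequency: how many docs contain the term at all
        df = sum(1 for text in docs.values() if term in text.split())
        if df == 0:
            continue
        # Rare terms get a higher weight than common ones
        idf = math.log(1 + n_docs / df)
        total += tf[term] * idf
    return total

query = "red slim fit"
ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
print(ranked[0])  # → d1 (matches all three query terms)
```

Real engines layer BM25 saturation, field boosts, and learned re-ranking on top of this, but the shape — per-term weights summed into a document score, then a sort — is the same.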
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
hyderabad, telangana
On-site
NxtWave is a fast-growing edtech startup in India, dedicated to revolutionizing the way students learn and shape their careers in the tech industry. We have established a vibrant community of learners nationwide and are committed to developing cutting-edge products that facilitate the acquisition of industry-ready skills on a large scale.

As a member of our team, your responsibilities will include:
- Designing, implementing, and delivering user-centric features that span frontend, backend, and database systems, all while following guidance.
- Defining and deploying RESTful/GraphQL APIs and creating efficient, scalable database schemas.
- Constructing reusable and maintainable frontend components using contemporary state management practices.
- Developing backend services in Node.js or Python while adhering to clean-architecture principles.
- Ensuring code quality and reliability through the writing and upkeep of unit, integration, and end-to-end tests.
- Containerizing applications and configuring CI/CD pipelines for automated builds and deployments.
- Upholding secure coding practices, accessibility standards (WCAG), and SEO fundamentals.
- Collaborating with Product, Design, and engineering teams to grasp and execute feature requirements effectively.

The ideal candidate should possess:
- A minimum of 1 year of experience in constructing full-stack web applications.
- Proficiency in JavaScript (ES6+), TypeScript, HTML5, and CSS3 (Flexbox/Grid).
- Advanced familiarity with React (Hooks, Context, Router) or an equivalent modern UI framework.
- Practical experience with state management patterns such as Redux, MobX, or custom solutions.
- Strong backend skills in Node.js (Express/Fastify) or Python (Django/Flask/FastAPI).
- Expertise in designing REST and/or GraphQL APIs and integrating them with backend services.
- A solid understanding of MySQL/PostgreSQL and some knowledge of NoSQL stores like Elasticsearch and Redis.
- Proficiency in using build tools (Webpack, Vite), package managers (npm/Yarn), and Git workflows.
- The ability to write and maintain tests with tools like Jest, React Testing Library, Pytest, and Cypress.
- Familiarity with Docker, CI/CD tools (GitHub Actions, Jenkins), and basic cloud deployments.
- A product-first mindset, coupled with strong problem-solving, debugging, and communication skills.

This position is based at our office in Hyderabad and requires on-site presence.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
kochi, kerala
On-site
The ideal candidate should possess strong skills in full-stack web application development using languages such as JavaScript and TypeScript, and frameworks such as React. You should showcase proven expertise in HTML5, CSS3, and JavaScript, along with a solid understanding of the Document Object Model (DOM). Previous experience working with a custom theme and/or the Storefront API is highly desirable, as is working knowledge of Shopify's theming system and Liquid templating.

Moreover, the candidate should have prior experience implementing and debugging third-party Shopify apps, as well as the ability to build unique solutions when necessary. Proficiency in vanilla JavaScript, jQuery, ES2015/ES6, and current JavaScript frameworks is essential, as is familiarity with Shopify's objects/properties, AJAX API, and metafields.

It would be advantageous to have knowledge of at least one cloud provider service such as AWS, Azure, or GCP. The candidate should demonstrate a solid grasp of object-oriented design principles and be able to apply them effectively in code. Familiarity with a full-stack or backend framework like Rails, Django, or Spring is preferred, as is experience with a JavaScript framework like React, Vue, or Angular.

Furthermore, an understanding of agile, lean, and DevOps principles, including testing, Continuous Integration/Continuous Delivery (CI/CD), and observability, is essential. Experience working with streaming media or wellness verticals would be a plus. Proficiency in four or more aspects of the tech stack is required, including Ruby, Ruby on Rails, JavaScript/TypeScript, Postgres, Arango, Elasticsearch, Redis, Heroku, AWS, New Relic, Docker, Kubernetes, and Stripe APIs.

It is also good to have experience with any Cloud Service Provider (CSP) architecture and implementation, including deployment and scaling; hands-on experience with Continuous Integration/Continuous Delivery (CI/CD) models such as DevOps, Git, CI/CD pipelines, and Infrastructure as Code (IaC); and familiarity with cloud services, Docker images and containers, Kubernetes, distributed caches (Redis), and distributed Application Performance Monitoring (APM) solutions.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
delhi
On-site
You will be responsible for collaborating with cross-functional teams to design, develop, and maintain high-quality software solutions using Python, Django (including DRF), FastAPI, and other modern frameworks.

Responsibilities:
- Build robust and scalable REST APIs to ensure efficient data transfer and seamless integration with frontend and third-party systems.
- Utilize Redis for caching, session management, and performance optimization.
- Design and implement scalable ETL pipelines to efficiently process and transform large datasets across systems.
- Integrate and maintain Kafka for building real-time data streaming and messaging services.
- Implement Elasticsearch for advanced search capabilities, data indexing, and analytics functionalities.
- Containerize applications using Docker for easy deployment and scalability.
- Design and manage PostgreSQL databases to ensure data integrity and performance tuning.
- Write clean, efficient, and well-documented code following best practices and coding standards.
- Participate in system design discussions and contribute to architectural decisions, particularly around data flow and microservices communication.
- Troubleshoot and debug complex software issues to ensure smooth operation of production systems.
- Profile and optimize Python code for improved performance and scalability.
- Implement and maintain CI/CD pipelines for automated testing.

Requirements:
- 2-4 years of experience in backend development using Python.
- Strong proficiency in Django, DRF, and RESTful API development.
- Experience with FastAPI, asyncio, and modern Python libraries.
- Solid understanding of PostgreSQL and relational database concepts.
- Proficiency with Redis for caching and performance optimization.
- Hands-on experience with Docker and container orchestration.
- Familiarity with Kafka for real-time messaging and event-driven systems.
- Experience implementing and maintaining ETL pipelines for structured/unstructured data.
- Working knowledge of Elasticsearch for search and data indexing.
- Exposure to AWS services (e.g., EC2, S3, RDS) and cloud-native development.
- Understanding of Test-Driven Development (TDD) and automation frameworks.
- Strong grasp of Git and collaborative development practices.
- Excellent communication skills, a team-oriented mindset, and experience with Agile development.

In return, we offer the opportunity to shape the future of unsecured lending in emerging markets, a competitive compensation package, professional development and growth opportunities, a collaborative, innovation-focused work environment, and comprehensive health and wellness benefits. Immediate joining is possible; this is a work-from-office role. The position is based in Gurugram, Sector 65.
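The ETL pipeline work described above is often structured as a chain of Python generators, so each stage streams records instead of materialising the whole dataset in memory (the record fields and cleaning rules here are hypothetical, and a real extract stage would read from a database cursor or Kafka consumer):

```python
def extract(rows):
    # Extract: yield raw records one at a time (stands in for a DB cursor or Kafka consumer)
    yield from rows

def transform(records):
    # Transform: normalise fields and drop rows that fail validation
    for r in records:
        if r.get("amount") is None:
            continue  # invalid row, skipped rather than crashing the pipeline
        yield {"user": r["user"].strip().lower(), "amount": int(r["amount"])}

def load(records, sink):
    # Load: write each cleaned record to the target store (a list here, a DB in practice)
    for r in records:
        sink.append(r)
    return sink

raw = [
    {"user": "  Alice ", "amount": "10"},
    {"user": "Bob", "amount": None},   # dropped by transform
    {"user": "CARA", "amount": "7"},
]
sink = load(transform(extract(raw)), [])
print(sink)  # → [{'user': 'alice', 'amount': 10}, {'user': 'cara', 'amount': 7}]
```

Because each stage is lazy, swapping the list for a million-row source changes nothing structurally; only `load` ever holds results, which is the property that makes this shape "scalable" in the sense the posting uses.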
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Senior Auditor, Technology at LegalZoom, you will be an impactful member of the internal audit team, assisting in achieving the department's mission and objectives. Your role will involve evaluating technology risks in a dynamic environment, assessing the design and effectiveness of internal controls over financial reporting, and ensuring compliance with operational and regulatory requirements. You will document audit procedures and results following departmental standards and execute within agreed timelines. Additionally, you will provide advisory support to stakeholders on internal control considerations, collaborate with external auditors when necessary, and focus on continuous improvement of the audit department. Your commitment to integrity and ethics, coupled with a passion for the internal audit profession and LegalZoom's mission, is essential.

Ideally, you hold a Bachelor's degree in computer science, information systems, or accounting, along with 3+ years of experience in IT internal audit and Sarbanes-Oxley compliance, particularly in the technology sector. Previous experience in a Big 4 accounting firm and internal audit at a public company would be advantageous. A professional certification such as CISA, CIA, CRISC, or CISSP is preferred. Strong communication skills, self-management abilities, and the capacity to work on multiple projects across different locations are crucial for this role. Familiarity with technologies like Oracle Cloud, AWS, Salesforce, Azure, and others is beneficial, along with reliable internet service for remote work.

Join LegalZoom in making a difference and contributing to the future of accessible legal advice for all. LegalZoom is committed to diversity, equality, and inclusion, offering equal employment opportunities to all employees and applicants without discrimination based on any protected characteristic.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking to hire an Azure Cloud Engineer to join their team in Bangalore, Karnataka, India. As an Azure Cloud Engineer, you will work in the Banking domain as an Azure consultant.

Requirements:
- Bachelor's/Master's degree in Computer Science or Data Science.
- 5 to 8 years of experience in software development and data structures/algorithms.
- 5 to 7 years of experience with programming languages such as Python or Java, and with SQL and NoSQL databases.
- 5 years of experience developing large-scale platforms, distributed systems, or networks, with familiarity with compute technologies and storage architecture.
- A strong understanding of microservices architecture.
- Experience building AKS applications on Azure, with a deep understanding of Kubernetes for availability and scalability of applications in Azure Kubernetes Service.
- Experience building and deploying applications on Azure using third-party tools such as Docker, Kubernetes, and Terraform.
- Experience working with AKS clusters, VNETs, NSGs, Azure storage technologies, Azure container registries, etc.
- A good understanding of building Redis, Elasticsearch, and MongoDB applications, along with experience with RabbitMQ.
- An end-to-end understanding of ELK, Azure Monitor, DataDog, Splunk, and the logging stack.
- Experience with development tools and CI/CD pipelines such as GitLab CI/CD, Artifactory, Cloudbees, Jenkins, Helm, and Terraform.
- Understanding of IAM roles on Azure and integration/configuration experience, preferably including setting up DataRobot or similar applications on Cloud/Azure.
- Experience in functional, integration, and security testing, as well as performance validation.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure globally, committed to helping organizations and society move confidently into the digital future.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
indore, madhya pradesh
On-site
At ClearTrail, you will be part of a team dedicated to developing solutions that empower those focused on ensuring the safety of individuals, locations, and communities. For over 23 years, ClearTrail has been a trusted partner of law enforcement and federal agencies worldwide, committed to safeguarding nations and enhancing lives. We are leading the way in the future of intelligence gathering through the creation of innovative artificial intelligence and machine learning-based lawful interception and communication analytics solutions aimed at addressing the world's most complex challenges.

We are currently looking for a Big Data Java Developer to join our team in Indore with 2-4 years of experience. As a Big Data Java Developer at ClearTrail, your responsibilities will include:
- Designing and developing high-performance, scalable applications using Java and big data technologies.
- Building and maintaining efficient data pipelines for processing large volumes of structured and unstructured data.
- Developing microservices, APIs, and distributed systems.
- Working with Spark, HDFS, Ceph, Solr/Elasticsearch, Kafka, and Delta Lake.
- Mentoring and guiding junior team members.

If you are a problem-solver with strong analytical skills, excellent verbal and written communication abilities, and a passion for developing cutting-edge solutions, we invite you to join our team at ClearTrail and be part of our mission to make the world a safer place.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
As a Software Developer 2 (FSD), you will be responsible for leading the design and delivery of complex end-to-end features across frontend, backend, and data layers.

Responsibilities:
- Make strategic architectural decisions; review and approve pull requests, enforcing clean-code guidelines, SOLID principles, and design patterns.
- Build and maintain shared UI component libraries and backend service frameworks for team reuse.
- Identify and eliminate performance bottlenecks in both browser rendering and server throughput.
- Instrument services with metrics and logging, and define and enforce comprehensive testing strategies.
- Own CI/CD pipelines for automating builds, deployments, and rollback procedures.
- Ensure OWASP Top-10 mitigations, WCAG accessibility, and SEO best practices.
- Partner with Product, UX, and Ops teams to translate business objectives into technical roadmaps.
- Facilitate sprint planning, estimation, and retrospectives for predictable deliveries.
- Mentor and guide SDE-1s and interns, and participate in hiring processes.

Qualifications:
- 3-5 years of experience building production full-stack applications end-to-end with measurable impact.
- Strong leadership skills in Agile/Scrum environments.
- Proficiency in React (or Angular/Vue), TypeScript, and modern CSS methodologies.
- Proficiency in Node.js (Express/NestJS), Python (Django/Flask/FastAPI), or Java (Spring Boot).
- Expertise in designing RESTful and GraphQL APIs and scalable database schemas, with knowledge of MySQL/PostgreSQL indexing, NoSQL databases, and caching.
- Experience with containerization (Docker) and AWS services such as Lambda, EC2, S3, and API Gateway.
- Skills in unit/integration and E2E testing, frontend profiling, backend tracing, and secure coding practices.
- Strong communication skills, the ability to convey technical trade-offs to non-technical stakeholders, and experience providing constructive feedback.

In addition to technical skills, we value qualities such as a commitment to delivering high-quality software, collaboration abilities, determination, creative problem-solving, openness to feedback, eagerness to learn and grow, and strong communication skills.

This position is based in Hyderabad.
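The indexing knowledge called out above (MySQL/PostgreSQL) can be demonstrated end-to-end with SQLite from the Python standard library, since the planner behaviour is analogous; the `orders` table and index name are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 50, i * 1.5) for i in range(1000)],
)

# Without an index, filtering on customer_id scans every row.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"
).fetchone()
print(plan[-1])  # planner detail, e.g. a full scan of orders

# After adding an index, the same query becomes a B-tree lookup.
conn.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"
).fetchone()
print(plan[-1])  # planner detail now reports USING INDEX idx_orders_customer
```

The same habit — read the plan before and after adding an index — carries over directly to `EXPLAIN` in MySQL and `EXPLAIN ANALYZE` in PostgreSQL, though the exact planner wording differs by engine and version.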
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As the Cloud DevOps Manager at our organization, you will lead a team of engineers in the design, implementation, maintenance, and troubleshooting of Linux-based systems. Your primary responsibilities will include ensuring system stability, driving innovation, and delivering high-quality infrastructure solutions that align with our organization's goals.

Key Responsibilities:
- Leadership and team management: Lead and mentor a team of Linux engineers, provide technical guidance, and manage workload distribution to ensure timely project completion. Collaborate with cross-functional teams to align IT infrastructure with organizational objectives.
- Infrastructure: Architect, deploy, and manage robust Linux-based environments, including servers, networking, and storage solutions. Ensure the scalability, reliability, and security of Linux systems, and oversee the automation of system deployment and management processes using tools such as Ansible, Puppet, or Chef.
- Maintenance and troubleshooting: Lead efforts in monitoring, maintaining, and optimizing system performance. Proactively identify and resolve technical issues to prevent system outages.
- Security and compliance: Implement and maintain security best practices for Linux systems, ensure compliance with industry standards and regulations, and handle documentation and reporting of systems, processes, and procedures.
- Continuous improvement: Stay updated with the latest Linux technologies, tools, and practices. Lead initiatives to enhance the efficiency, reliability, and security of Linux environments, including the adoption of cloud technologies and containerization.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
- At least 10 years of experience in Linux system administration, with a minimum of 2 years in a leadership or senior technical role.
- Deep understanding of Linux operating systems, networking principles, automation tools, scripting languages, virtualization, containerization, and cloud platforms.
- Strong problem-solving skills and excellent communication and interpersonal abilities.

This is a full-time, permanent position offering benefits such as health insurance, provident fund, a yearly bonus, and a day shift schedule. If you meet the qualifications and are ready to take on this challenging role, we look forward to receiving your application.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
maharashtra
On-site
As an SDE-3 Backend at Crimson Enago, you will lead a team of web developers and play a major role in end-to-end project development and delivery. Your responsibilities will include setting engineering examples, hiring and training the team, and coding 100% of the time. You will collaborate with the Engineering Manager, Principal Engineer, SDE-3 leads, and Technical Project Manager to ensure successful project outcomes.

The ideal candidate for this position is an SDE-3 Backend with over 5 years of enterprise backend web experience, particularly in the NodeJS-AWS stack. You should possess excellent research skills to address complex business problems, experience in unit and integration testing, and a commitment to maintaining highly performant and testable code following software design patterns. Your role will involve designing optimized scalable solutions, breaking down complex problems into manageable tasks, and conducting code reviews to ensure quality and efficiency.

In addition to your technical skills, you should have a strong background in backend technologies such as Postgres, MySQL, MongoDB, Redis, Kafka, Docker, Kubernetes, and CI/CD. Experience with AWS technologies like Lambda functions, DynamoDB, and SQS is essential. You should be well-versed in HTML5, CSS3, and CSS frameworks, and prioritize developer tooling, testing, monitoring, and observability in your work. Collaboration and effective communication with team members, product managers, and stakeholders are key aspects of this role.

If you have a proven track record of architecting cost-efficient and scalable solutions, backend development experience, and a passion for delivering customer value, we encourage you to apply. Experience with Elasticsearch server cluster optimization and Apache Spark/Ray would be an added advantage. Join us at Crimson Enago to revolutionize the research industry and make a positive impact on the world through innovative technology and collaborative team efforts.
For more information, visit our websites:
- Trinka: www.trinka.ai
- RAx: www.raxter.io
- Crimson Enago: www.crimsoni.com
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
At Crimson Enago, we are dedicated to developing AI-powered tools and services that enhance the productivity of researchers and professionals. Through our flagship products, Trinka and RAx, we aim to streamline the stages of knowledge discovery, acquisition, creation, and dissemination.

Trinka is an AI-driven English grammar checker and writing assistant tailored for academic and technical writing. Crafted by linguists, scientists, and language enthusiasts, Trinka identifies and rectifies numerous intricate writing errors in real time, including contextual spelling mistakes, advanced grammar issues, and vocabulary enhancements. Trinka also offers writing suggestions to ensure professional, concise, and engaging content. With subject-specific corrections, Trinka ensures that the writing aligns with the subject matter, and its Enterprise solutions provide unlimited access and customizable features.

RAx is a smart workspace designed to support researchers, including students, professors, and corporate researchers, in their projects. Powered by proprietary AI algorithms, RAx serves as an integrated workspace for research endeavors, connecting various sources of information to user behaviors such as reading, writing, annotating, and discussions. This synergy reveals new insights and opportunities in the academic realm, revolutionizing traditional research practices.

Our team comprises passionate researchers, engineers, and designers united by the vision of transforming research-intensive projects. By alleviating cognitive burdens and facilitating the conversion of information into knowledge, we strive to simplify and enrich the research process. With a focus on scalability, data processing, AI integration, and global user interactions, our engineering team aims to empower individuals worldwide in their research pursuits.
As a Principal Engineer Fullstack at Trinka, you will lead a team of web developers, driving top-tier engineering standards and overseeing end-to-end project development and delivery. Collaborating with the Engineering Manager, Principal Engineer, SDE-3 leads, and Technical Project Manager, you will play a pivotal role in team management, recruitment, and training. Hands-on coding will remain a significant portion of your daily responsibilities.

Ideal candidates possess over 7 years of enterprise frontend-full-stack web experience, with expertise in the AngularJS-Java-AWS stack. Key characteristics we value include exceptional research skills, a commitment to testing and code quality, a penchant for scalable solutions, adeptness at project estimation and communication, and proficiency in cloud infrastructure optimization. Candidates should also exhibit a keen eye for detail, a passion for user-experience excellence, and the collaborative spirit essential for high-impact project delivery.

Experience requirements encompass proven expertise in solution architecting, frontend-full-stack development, backend technologies, AWS services, HTML5, CSS3, CSS frameworks, developer workflows, testing practices, and collaborative software engineering. A deep-rooted interest in profiling, impact analysis, root cause analysis, and Elasticsearch server cluster optimization is advantageous, reflecting a holistic approach to software development and problem-solving.

Join us at Crimson Enago and be part of a dynamic team committed to reshaping research practices and empowering professionals worldwide with innovative tools and services.
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionises customer engagement by transforming contact centres into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organisations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as an adaptive and forward-thinking solution in its space.

Overview: We seek an experienced Staff Software Engineer to lead the design and development of our data warehouse and analytics platform, in addition to helping raise the engineering bar for the entire technology stack at Level AI, including applications, platform, and infrastructure. You will actively collaborate with team members and the wider Level AI engineering community to develop highly scalable and performant systems. As a technical thought leader, you will help solve complex problems by designing and building simple, elegant technical solutions, coach and mentor junior engineers, drive engineering best practices, and actively collaborate with product managers and other stakeholders both inside and outside the team.

What you'll get to do at Level AI (and more as we grow together):
• Design, develop, and evolve data pipelines that ingest and process high-volume data from multiple external and internal sources.
• Build scalable, fault-tolerant architectures for both batch and real-time data workflows using tools like GCP Pub/Sub, Kafka, and Celery.
• Define and maintain robust data models with a focus on domain-oriented design, supporting both operational and analytical workloads.
• Architect and implement data lake/warehouse solutions using Postgres and Snowflake.
• Lead the design and deployment of workflow orchestration using Apache Airflow for end-to-end pipeline automation.
• Ensure platform reliability with strong monitoring, alerting, and observability for all data services and pipelines.
• Collaborate closely with other internal product and engineering teams to align data platform capabilities with product and business needs.
• Own and enforce data quality, schema evolution, data contract practices, and governance standards.
• Provide technical leadership, mentor junior engineers, and contribute to cross-functional architectural decisions.

We'd love to explore more about you if you have:
• 8+ years of experience building large-scale data systems, preferably in high-ingestion, multi-source environments.
• Strong system design, debugging, and performance tuning skills.
• Strong programming skills in Python and Java.
• Deep understanding of SQL (Postgres, MySQL) and data modeling (star/snowflake schemas).
• Hands-on experience with streaming platforms like Kafka and GCP Pub/Sub.
• Expertise with Airflow or similar orchestration frameworks.
• Solid experience with Snowflake, Postgres, and distributed storage design.
• Familiarity with Celery for asynchronous task processing.
• Comfort working with ElasticSearch for data indexing and querying.
• Exposure to Redash, Metabase, or similar BI/analytics tools.
• Proven experience deploying solutions on cloud platforms like GCP.

Compensation: We offer market-leading compensation, based on the skills and aptitude of the candidate.

Preferred attributes:
• Experience with data governance and lineage tools.
• Demonstrated ability to handle scale, reliability, and incident response in data systems.
• Excellent communication and stakeholder management skills.
• Passion for mentoring and growing engineering talent.

To learn more, visit: https://thelevel.ai/ (ref:hirist.tech)
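One responsibility named above is enforcing data contracts and schema evolution on ingested records. As a rough, hedged illustration (not Level AI's actual implementation; the `InteractionEvent` fields are invented for this sketch), a minimal contract check in Python might look like:

```python
from dataclasses import dataclass, fields

@dataclass
class InteractionEvent:
    # Hypothetical contract for one ingested contact-centre interaction.
    call_id: str
    agent_id: str
    duration_sec: int

def validate(record: dict) -> InteractionEvent:
    """Reject records missing contracted fields before they enter the pipeline."""
    expected = {f.name for f in fields(InteractionEvent)}
    missing = expected - record.keys()
    if missing:
        raise ValueError(f"contract violation, missing: {sorted(missing)}")
    # Extra fields are dropped here; a real contract would version the schema
    # and route violations to a dead-letter queue instead of failing hard.
    return InteractionEvent(**{k: record[k] for k in expected})

event = validate({"call_id": "c-1", "agent_id": "a-9", "duration_sec": 312})
print(event)
```

Real systems typically express such contracts in a schema registry (Avro, Protobuf, or JSON Schema) so producers and consumers evolve independently; this sketch only shows the enforcement step.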
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
Data Dynamics is a global leader in enterprise data management, specializing in Digital Trust and Data Democracy. With a client base of over 300 organizations, including 25% of the Fortune 20, Data Dynamics is dedicated to establishing a transparent, unified, and empowered data ecosystem. The company's AI-powered self-service data management software is reshaping traditional data management practices by granting data creators of all proficiency levels ownership and control over their data.

As a Regional Solutions Engineer at Data Dynamics, you will assume a pivotal role in technical pre-sales and customer management. During the pre-sales phase, you will engage with customers to elicit business and technical requirements, coordinate technical information, deliver demonstrations, and showcase product positioning capabilities. You will research the specific needs of customers, assess the feasibility of their requirements, and translate customers' business demands into technical and licensing requisites.

In instances involving a Proof of Concept (POC) or a sale, you will collaborate with the sales representative for review. For existing customers where upselling or cross-selling is the primary objective, you will work closely with the customer success manager assigned to the account and the global support team as necessary. A seamless handover to the implementation team before the customer's go-live phase is essential to ensure a consistently smooth customer experience.

A core expectation is your ability to independently set up and conduct demonstrations and POCs from inception. The POC process requires defining and agreeing upon success criteria with the customer and sales representative, and monitoring progress towards the customer's objectives.
Post-POC, you are expected to secure customer approval through a presentation to Data Dynamics Sales and the customers' decision-makers. Following any sale globally, your presence may be required to support or lead the installation, depending on time zones and available resources.

Given your regular interactions with both new and existing customers, and your understanding of their business visions, goals, technical capacities, and constraints, you are expected to advocate for customers' needs. Providing feedback on business processes, technical matters, or enhancements to the relevant teams within Data Dynamics is crucial, as is driving continuous improvement across all aspects of our operations in pursuit of customer success and satisfaction.

Your responsibilities extend to scriptwriting when necessary, bug reporting, software testing, personal activity tracking, CRM updates, and communication updates to management. Occasional travel for customer or internal meetings may be part of the role.

At Data Dynamics, we are committed to delivering an exceptional customer experience and fostering a collaborative, challenging, fun, and innovative work environment. If you are a customer-centric, passionate individual with a strong commitment to building world-class, scalable, data-driven software, we would like to connect with you.
Qualifications:
- Proficiency in Windows and Linux, Docker, and Kubernetes
- Knowledge of SQL, PostgreSQL, Elasticsearch, File/NAS, and Object Storage
- Familiarity with Data Discovery, Data Science, OCR, NLP, Computer Vision, AI, Keyword Search, Regex, Data Governance, GDPR, HIPAA, CCPA
- Experience with Microservices-based applications, ITILv4, Project Management, and Data Migration
- Prior experience in Presales and Integration
- Strong problem-solving, presentation, and communication skills
- Ability to collaborate effectively in a team setting
- Background in data management or a related field is advantageous
- Bachelor's degree in Computer Science, Engineering, or a related field
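Several of the qualifications above (Regex, Keyword Search, and Data Discovery under GDPR/HIPAA/CCPA) come together in pattern-based sensitive-data scanning. A toy, stdlib-only sketch of the idea follows; the patterns are deliberately simplified assumptions that a real scanner would extend with many more categories and validation rules:

```python
import re

# Hypothetical, intentionally narrow patterns for illustration only.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def discover(text: str) -> dict:
    """Map each sensitive-data category to the matches found in `text`."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items()}

hits = discover("Contact jane.doe@example.com or 555-867-5309 for access.")
print(hits)
```

Production data-discovery tools layer context checks (checksums, proximity keywords, ML classifiers) on top of raw regex matches to keep false-positive rates manageable at NAS/object-storage scale.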
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Role
Uber's data infrastructure is composed of a wide variety of compute engines, scheduling/execution solutions, and storage solutions. Compute engines such as Apache Spark™, Presto®, Apache Hive™, Neutrino, and Apache Flink® allow Uber to run petabyte-scale operations on a daily basis. Scheduling and execution engines such as Piper (Uber's fork of Apache Airflow™), Query Builder (user platform for executing compute SQLs), and Query Runner (proxy layer for execution of workloads) exist to allow scheduling and execution of compute workloads. Finally, a significant portion of storage is supported by HDFS, Google Cloud Storage (GCS), Apache Pinot™, ElasticSearch®, etc.

Each engine supports thousands of executions, owned by multiple owners and sub-teams. With such a complex and diverse big-data landscape operating at petabyte scale, with around a million applications/queries running each day, it is imperative to give stakeholders a holistic view of the right performance and resource-consumption insights.

DataCentral is a comprehensive platform that provides users with essential insights into big-data applications and queries. It empowers data platform users by offering detailed information on workflows and apps, improving productivity by reducing debugging time, and improving cost efficiency through detailed resource-efficiency insights.

As an engineer on the DataCentral team, you will solve some of the most complex problems in observability and efficiency of distributed data systems at Uber scale.

What You'll Do
• Work with Uber data science and engineering teams to improve observability of batch data use-cases at Uber.
• Leverage knowledge of Spark internals to dramatically improve customers' Spark job performance.
• Design and implement AI-based solutions to improve application debuggability.
• Design and implement algorithms to optimize resource consumption without impacting reliability.
• Design and develop prediction and forecasting models to proactively predict system degradations and failures.
• Work with multiple partner teams within and outside Uber and build cross-functional solutions in a collaborative work environment.
• Work with the community to upstream Uber's contributions to open source, and keep our internal fork up to date.

What You'll Need
• Bachelor's degree in Computer Science or a related field.
• 5+ years of experience building large-scale distributed software systems.
• Solid understanding of Java for backend/systems software development.
• MS/PhD in Computer Science or a related field.
• Experience managing production systems with a strong availability SLA.
• Experience working with Apache Spark or similar analytics technologies.
• Experience working with large-scale distributed systems, HDFS/YARN.
• Experience working with SQL compilers and SQL plan/runtime optimization.
• Experience working with Kubernetes.
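The degradation-forecasting responsibility above can be illustrated with a toy baseline: flag a job run whose runtime deviates sharply from its recent history. This is a hedged, stdlib-only sketch; the window size and threshold are illustrative assumptions, not Uber's actual models:

```python
from statistics import mean, stdev

def flag_degradations(runtimes, window=5, k=3.0):
    """Flag run indices whose runtime exceeds mean + k*stdev of the prior window."""
    flagged = []
    for i in range(window, len(runtimes)):
        hist = runtimes[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if runtimes[i] > mu + k * sigma:
            flagged.append(i)
    return flagged

# A stable ~10-minute job that suddenly takes an hour on its last run.
history = [600, 610, 595, 605, 600, 598, 3600]
print(flag_degradations(history))  # [6]
```

Real platforms would replace this z-score rule with seasonality-aware forecasts and feed the flags into alerting, but the structure, a per-job rolling baseline compared against each new run, is the same.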
Posted 1 week ago