2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Lead AI Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 2–6 Years
Level: Senior Level

🌐 About Darwix AI
Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time.
We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale.

🧠 Role Overview
As the Lead AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery.
You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously.

🔧 Key Responsibilities
1. AI Architecture & Model Development
- Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval.
- Build, fine-tune, and deploy STT models (Whisper, Wav2Vec 2.0) and diarization systems for speaker separation.
- Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models.
2. Real-Time Voice AI System Development
- Design low-latency pipelines for capturing and processing audio in real time across multilingual environments.
- Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching.
- Develop asynchronous, event-driven architectures for voice processing and decision-making.
3. RAG & Knowledge Graph Pipelines
- Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases (a minimal sketch of such a pipeline appears later in this listing).
- Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect them to LangChain/LlamaIndex workflows.
- Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings).
4. Fine-Tuning & Prompt Engineering
- Fine-tune LLMs and foundational models using RLHF, SFT, and PEFT (e.g., LoRA) as needed.
- Optimize prompts for summarization, categorization, tone analysis, objection handling, etc.
- Perform few-shot and zero-shot evaluations for quality benchmarking.
5. Pipeline Optimization & MLOps
- Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions.
- Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation.
- Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features.
6. Team Leadership & Cross-Functional Collaboration
- Lead, mentor, and grow a high-performing AI engineering team.
- Collaborate with backend, frontend, and product teams to build scalable production systems.
- Participate in architectural and design decisions across AI, backend, and data workflows.

🛠️ Key Technologies & Tools
- Languages & Frameworks: Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, Hugging Face Transformers
- Voice & Audio: Whisper, Wav2Vec 2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS
- Vector DBs & RAG: FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph
- LLMs & GenAI APIs: OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3
- DevOps & Deployment: Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3)
- Databases: MongoDB, PostgreSQL, MySQL, Pinecone, TimescaleDB
- Monitoring & Logging: Prometheus, Grafana, Sentry, Elastic Stack (ELK)

🎯 Requirements & Qualifications
👨‍💻 Experience
- 2–6 years of experience building and deploying AI/ML systems, with at least 2 years in NLP or voice technologies.
- Proven track record of production deployment of ASR, STT, NLP, or GenAI models.
- Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations.
📚 Educational Background
- Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
- Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities).
⚙️ Technical Skills
- Strong coding experience in Python and familiarity with FastAPI/Django.
- Understanding of distributed architectures, memory management, and latency optimization.
- Familiarity with transformer-based model architectures, training techniques, and data pipeline design.
💡 Bonus Experience
- Worked on multilingual speech recognition and translation.
- Experience deploying AI models on edge devices or browsers.
- Built or contributed to open-source ML/NLP projects.
- Published papers or patents in voice, NLP, or deep learning domains.

🚀 What Success Looks Like in 6 Months
- Lead the deployment of a real-time STT + diarization system for at least one enterprise client.
- Deliver a high-accuracy nudge-generation pipeline using RAG and summarization models.
- Build an in-house knowledge indexing + vector DB framework integrated into the product.
- Mentor 2–3 AI engineers and own execution across multiple modules.
- Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.

💼 What We Offer
- Compensation: Competitive fixed salary + equity + performance-based bonuses
- Impact: Ownership of key AI modules powering thousands of live enterprise conversations
- Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
- Culture: High-trust, outcome-first environment that celebrates execution and learning
- Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
- Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months

⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you’re a builder, architect, and visionary—who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
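To illustrate the kind of RAG retrieval pipeline this listing describes (chunking, embedding, FAISS indexing), here is a minimal Python sketch. It is an illustrative assumption, not Darwix AI's actual implementation; the embedding model, chunk size, and sample documents are placeholders.

```python
# Minimal RAG retrieval sketch: chunk documents, embed them, index in FAISS,
# and fetch the top-k chunks for a query. Illustrative only; model name,
# chunk size, and k are assumed placeholders, not a production configuration.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def chunk(text: str, size: int = 400) -> list[str]:
    # Naive fixed-size chunking; production systems usually split on
    # sentence or semantic boundaries, with overlap.
    return [text[i:i + size] for i in range(0, len(text), size)]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
docs = ["...knowledge base article 1...", "...knowledge base article 2..."]
chunks = [c for d in docs for c in chunk(d)]

# Embed and index (inner product over normalized vectors = cosine similarity).
emb = model.encode(chunks, normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

def retrieve(query: str, k: int = 3) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [chunks[i] for i in ids[0]]

# The retrieved chunks would then be placed into an LLM prompt for generation.
print(retrieve("How do I handle a pricing objection?"))
```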
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on “Why AI at Darwix?” to:
📧 careers@cur8.in
Subject Line: Application – Lead AI Engineer – [Your Name]
Include links to:
- Any relevant open-source contributions
- LLM/STT models you've fine-tuned or deployed
- RAG pipelines you've worked on

🔍 Final Thought
This is not just a job. This is your opportunity to build the world’s most scalable AI sales intelligence platform—from India, for the world.
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Lead Backend Developer (Python & Microservices)
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience Required: 6–10 years

About Darwix AI
Darwix AI is at the forefront of building the future of revenue enablement through a GenAI-powered conversational intelligence and real-time agent assist platform. Our mission is to empower global sales teams to close better, faster, and smarter by harnessing the transformative power of Generative AI, real-time speech recognition, multilingual insights, and next-gen sales analytics.
Backed by top venture capitalists and industry leaders, Darwix AI is scaling rapidly across India, MENA, and US markets. With a leadership team from IIT, IIM, and BITS, we are building enterprise-grade SaaS solutions that are poised to redefine how organizations engage with customers. If you are looking for a role where your work directly powers mission-critical AI applications used globally, this is your moment.

Role Overview
We are seeking a Lead Backend Developer (Python & Microservices) to drive the architecture, scalability, and performance of our GenAI platform's core backend services. You will own the responsibility of designing, building, and leading backend systems that are real-time, distributed, and capable of supporting AI-powered applications at scale.
You will mentor engineers, set technical direction, collaborate across AI, Product, and Frontend teams, and ensure that the backend infrastructure is robust, secure, and future-proof. This is a high-ownership, high-impact role for individuals who are passionate about building world-class systems that are production-ready, scalable, and designed for rapid innovation.

Key Responsibilities
🔹 Backend Architecture and Development
- Architect and lead the development of highly scalable, modular, and event-driven backend systems using Python.
- Build and maintain RESTful APIs and microservices that power real-time, multilingual conversational intelligence platforms.
- Design systems with a strong focus on scalability, fault tolerance, high availability, and security.
- Implement API gateways, service registries, authentication/authorization layers, and caching mechanisms.
🔹 Microservices Strategy
- Champion microservices best practices: service decomposition, asynchronous communication, event-driven workflows.
- Manage service orchestration, containerization, and scaling using Docker and Kubernetes (preferred).
- Implement robust service monitoring, logging, and alerting frameworks for proactive system health management.
🔹 Real-time Data Processing
- Build real-time data ingestion and processing pipelines using tools like Kafka, Redis Streams, and WebSockets (a minimal Redis Streams sketch appears later in this listing).
- Integrate real-time speech-to-text (STT) engines and AI/NLP pipelines into backend flows.
- Optimize performance to achieve low-latency processing suitable for real-time agent assist experiences.
🔹 Database and Storage Management
- Design and optimize relational (PostgreSQL/MySQL) and non-relational (MongoDB, Redis) database systems.
- Implement data sharding, replication, and backup strategies for resilience and scalability.
- Integrate vector databases (FAISS, Pinecone, Chroma) to support AI retrieval and embedding-based search.
🔹 DevOps and Infrastructure
- Collaborate with DevOps teams to deploy scalable and reliable services on AWS (EC2, S3, Lambda, EKS).
- Implement CI/CD pipelines, containerization strategies, and blue-green deployment models.
- Ensure security compliance across all backend services (API security, encryption, RBAC).
🔹 Technical Leadership
- Mentor junior and mid-level backend engineers.
- Define and enforce coding standards, architectural patterns, and best practices.
- Conduct design reviews and code reviews, and ensure high engineering quality across the backend team.
🔹 Collaboration
- Work closely with AI scientists, Product Managers, Frontend Engineers, and Customer Success teams to deliver delightful product experiences.
- Translate business needs into technical requirements and backend system designs.
- Drive sprint planning, estimation, and delivery for backend engineering sprints.

Core Requirements
Technical Skills
- 6–10 years of hands-on backend engineering experience.
- Expert-level proficiency in Python.
- Strong experience building scalable REST APIs and microservices.
- Deep understanding of FastAPI (preferred) or Flask/Django frameworks.
- In-depth knowledge of relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases.
- Experience with event-driven architectures: Kafka, RabbitMQ, Redis Streams.
- Proficiency in containerization and orchestration: Docker, Kubernetes.
- Familiarity with real-time communication protocols: WebSockets, gRPC.
- Strong understanding of cloud platforms (AWS preferred) and serverless architectures.
- Good experience with DevOps tools: GitHub Actions, Jenkins, Terraform (optional).
Bonus Skills
- Exposure to integrating AI/ML models (especially LLM, STT, and diarization models) in backend systems.
- Familiarity with vector search databases and RAG-based architectures.
- Knowledge of GraphQL API development (optional).
- Experience in multilingual platform scaling (support for Indic languages is a plus).
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
- Experience working in product startups, SaaS platforms, AI-based systems, or high-growth technology companies.
- Proven track record of owning backend architecture at scale (millions of users or real-time systems).
- Strong understanding of software design principles (SOLID, DRY, KISS) and scalable system architecture.

What You’ll Get
- Ownership: Lead backend engineering at one of India's fastest-growing GenAI startups.
- Impact: Build systems that directly power the world's next-generation enterprise sales platforms.
- Learning: Work with an elite founding team and top engineers from IIT, IIM, BITS, and top tech companies.
- Growth: Fast-track your career into senior technology leadership roles.
- Compensation: Competitive salary + ESOPs + performance bonuses.
- Culture: High-trust, high-ownership, no-bureaucracy environment focused on speed and innovation.
- Vision: Be a part of a once-in-a-decade opportunity building from India for the world.

About the Tech Stack You’ll Work On
- Languages: Python 3.x
- Frameworks: FastAPI (primary), Flask/Django (secondary)
- Data Stores: PostgreSQL, MongoDB, Redis, FAISS, Pinecone
- Messaging Systems: Kafka, Redis Streams
- Cloud Platforms: AWS (EC2, S3, Lambda, EKS)
- DevOps: Docker, Kubernetes, GitHub Actions
- Others: WebSockets, OAuth 2.0, JWT, Microservices Patterns

Application Process
1. Submit your updated resume and GitHub/portfolio links (if available).
2. Shortlisted candidates will have a technical discussion and coding assessment.
3. Technical interview rounds covering system design, backend architecture, and problem-solving.
4. Final leadership interaction round.
5. Offer!
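As a rough illustration of the event-driven, Redis Streams-style pipelines this role mentions, here is a minimal asyncio sketch using redis-py. The stream name and payload fields are hypothetical; this is not Darwix AI's actual pipeline.

```python
# Minimal Redis Streams producer/consumer sketch with redis-py's asyncio API.
# Stream name and payload fields are hypothetical placeholders.
import asyncio
import redis.asyncio as redis

STREAM = "transcripts"  # assumed stream name

async def produce(r: redis.Redis) -> None:
    # XADD appends an event to the stream.
    await r.xadd(STREAM, {"call_id": "42", "text": "hello world"})

async def consume(r: redis.Redis) -> None:
    last_id = "0"
    while True:
        # XREAD blocks up to 5s waiting for entries newer than last_id.
        resp = await r.xread({STREAM: last_id}, count=10, block=5000)
        if not resp:
            break  # nothing new; a real worker would keep looping
        for _, entries in resp:
            for entry_id, fields in entries:
                print(entry_id, fields)  # hand off to STT/NLP processing here
                last_id = entry_id

async def main() -> None:
    r = redis.Redis(decode_responses=True)
    await produce(r)
    await consume(r)
    await r.aclose()

asyncio.run(main())
```

A production consumer would typically use consumer groups (XGROUP/XREADGROUP) so multiple workers can share the stream with acknowledgement semantics.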
How to Apply
📩 careers@darwix.ai
Please include:
- Updated resume
- GitHub profile (optional but preferred)
- 2–3 lines about why you're excited to join Darwix AI as a Lead Backend Engineer

Join Us at Darwix AI – Build the AI Future for Revenue Teams, Globally!
#LeadBackendDeveloper #PythonEngineer #MicroservicesArchitecture #BackendEngineering #FastAPI #DarwixAI #AIStartup #TechCareers
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior Python Developer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 3–8 years

About Darwix AI
Darwix AI is one of India’s fastest-growing AI startups, transforming enterprise sales with our GenAI-powered conversational intelligence and real-time agent assist suite. Our platform is used by high-growth enterprises across India, MENA, and Southeast Asia to improve sales productivity, personalize customer conversations, and unlock revenue intelligence in real time.
We are backed by marquee VCs, 30+ angel investors, and led by alumni from IITs, IIMs, and BITS with deep experience in building and scaling products from India for the world.

Role Overview
As a Senior Python Developer at Darwix AI, you will be at the core of our engineering team, leading the development of scalable, secure, and high-performance backend systems that support AI workflows, real-time data processing, and enterprise-grade integrations.
This role requires deep technical expertise in Python, a strong foundation in backend architecture, and the ability to collaborate closely with AI, product, and infrastructure teams. You will take ownership of critical backend modules and shape the engineering culture in a rapidly evolving, high-impact environment.

Key Responsibilities
System Architecture & API Development
- Design, implement, and optimize backend services and microservices using Python frameworks such as FastAPI, Django, or Flask
- Lead the development of scalable RESTful APIs that integrate with frontend, mobile, and AI systems
- Architect low-latency, fault-tolerant services supporting real-time sales analytics and AI inference
Data Pipelines & Integrations
- Build and optimize ETL pipelines to manage structured and unstructured data from internal and third-party sources
- Integrate APIs with CRMs, telephony systems, transcription engines, and enterprise platforms like Salesforce, Zoho, and LeadSquared
- Lead scraping and data ingestion efforts from large-scale, dynamic web sources using Playwright, BeautifulSoup, or Scrapy (a minimal scraping sketch follows this listing)
AI/ML Enablement
- Work closely with AI engineers to build infrastructure for LLM/RAG pipelines, vector DBs, and real-time AI decisioning
- Implement backend support for prompt orchestration, LangChain flows, and function-calling interfaces
- Support model deployment, inference APIs, and logging/monitoring for large-scale GenAI pipelines
Database & Storage Design
- Optimize database design and queries using MySQL, PostgreSQL, and MongoDB
- Architect and manage Redis and Kafka for caching, queueing, and real-time communication
DevOps & Quality
- Ensure continuous delivery through version control (Git), CI/CD pipelines, testing frameworks, and Docker-based deployments
- Identify and resolve bottlenecks related to performance, memory, or data throughput
- Adhere to best practices in code quality, testing, security, and documentation
Leadership & Collaboration
- Mentor junior developers and participate in code reviews
- Collaborate cross-functionally with product, AI, design, and sales engineering teams
- Contribute to architectural decisions, roadmap planning, and scaling strategies

Qualifications
- 4–8 years of backend development experience in Python, with a deep understanding of object-oriented and functional programming
- Hands-on experience with FastAPI, Django, or Flask in production environments
- Proven experience building scalable microservices, data pipelines, and backend systems that support live applications
- Strong command of REST API architecture, database optimization, and data modeling
- Solid experience working with web scraping tools, automation frameworks, and external API integrations
- Knowledge of AI tools like LangChain, Hugging Face, vector DBs (Pinecone, Weaviate, FAISS), or RAG architectures is a strong plus
- Familiarity with cloud infrastructure (AWS/GCP), Docker, and containerized deployments
- Comfortable working in fast-paced, high-ownership environments with shifting priorities and dynamic problem-solving
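To ground the scraping and ingestion responsibility above, here is a minimal sketch with requests and BeautifulSoup. The target URL and CSS selector are hypothetical; dynamic, JavaScript-heavy sites would need Playwright instead.

```python
# Minimal scrape-and-ingest sketch using requests + BeautifulSoup.
# URL and selector are hypothetical placeholders; respect robots.txt
# and site terms in any real ingestion job.
import requests
from bs4 import BeautifulSoup

def fetch_titles(url: str) -> list[str]:
    resp = requests.get(url, timeout=10, headers={"User-Agent": "etl-bot/0.1"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # Extract text from a hypothetical listing-title selector.
    return [el.get_text(strip=True) for el in soup.select("h2.listing-title")]

if __name__ == "__main__":
    for title in fetch_titles("https://example.com/listings"):
        print(title)  # downstream: normalize and load into the warehouse
```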
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🧠 Job Title: Senior Machine Learning Engineer
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 4–8 years
Education: B.Tech / M.Tech / Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related fields

🚀 About Darwix AI
Darwix AI is India's fastest-growing GenAI SaaS startup, building real-time conversational intelligence and agent-assist platforms that supercharge omnichannel enterprise sales teams across India, MENA, and Southeast Asia. Our mission is to redefine how revenue teams operate by using Generative AI, LLMs, Voice AI, and deep analytics to deliver better conversations, faster deal cycles, and consistent growth.
Our flagship platform, Transform+, analyzes millions of hours of sales conversations, gives live nudges, builds AI-powered sales content, and enables revenue teams to become truly data-driven — in real time. We’re backed by marquee investors, industry veterans, and AI experts, and we’re expanding fast.
As a Senior Machine Learning Engineer, you will play a pivotal role in designing and deploying intelligent ML systems that power every layer of this platform — from speech-to-text, diarization, vector search, and summarization to recommendation engines and personalized insights.

🎯 Role Overview
This is a high-impact, high-ownership role for someone who lives and breathes data, models, and real-world machine learning. You will design, train, fine-tune, deploy, and optimize ML models across various domains — speech, NLP, tabular, and ranking. Your work will directly power critical product features: from personalized agent nudges and conversation scoring to lead scoring, smart recommendations, and retrieval-augmented generation (RAG) pipelines.
You’ll be the bridge between data science, engineering, and product — converting ideas into models, and models into production-scale systems with tangible business value.

🧪 Key Responsibilities
🔬 1. Model Design, Training, and Optimization
- Develop and fine-tune machine learning models using structured, unstructured, and semi-structured data sources.
- Work with models across domains: text classification, speech transcription, named entity recognition, topic modeling, summarization, time series, and recommendation systems (a minimal text-classification sketch follows this listing).
- Explore and implement transformer architectures, BERT-style encoders, Siamese networks, and retrieval-based models.
📊 2. Data Engineering & Feature Extraction
- Build robust ETL pipelines to clean, label, and enrich data for supervised and unsupervised learning tasks.
- Work with multimodal inputs — audio, text, metadata — and build smart representations for downstream tasks.
- Automate data collection from APIs, CRMs, sales transcripts, and call logs.
⚙️ 3. Productionizing ML Pipelines
- Package and deploy models in scalable APIs (using FastAPI, Flask, or similar frameworks).
- Work closely with DevOps to containerize and orchestrate ML workflows using Docker, Kubernetes, or CI/CD pipelines.
- Ensure production readiness: logging, monitoring, rollback, and fail-safes.
📈 4. Experimentation & Evaluation
- Design rigorous experiments using A/B tests, offline metrics, and post-deployment feedback loops.
- Continuously optimize model performance (latency, accuracy, precision-recall trade-offs).
- Implement drift detection and re-training pipelines for models in production.
🔁 5. Collaboration with Product & Engineering
- Translate business problems into ML problems and align modeling goals with user outcomes.
- Partner with product managers, AI researchers, data annotators, and frontend/backend engineers to build and launch features.
- Contribute to the product roadmap with ML-driven ideas and prototypes.
🛠️ 6. Innovation & Technical Leadership
- Evaluate open-source and proprietary LLM APIs, AutoML frameworks, vector databases, and model inference techniques.
- Drive innovation in voice-to-insight systems (ASR + diarization + NLP).
- Mentor junior engineers and contribute to best practices in ML development and deployment.

🧰 Tech Stack
🔧 Languages & Frameworks
- Python (core), SQL, Bash
- PyTorch, TensorFlow, Hugging Face, scikit-learn, XGBoost, LightGBM
🧠 ML & AI Ecosystem
- Transformers, RNNs, CNNs, CRFs
- BERT, RoBERTa, GPT-style models
- OpenAI API, Cohere, LLaMA, Mistral, Anthropic Claude
- FAISS, Pinecone, Qdrant, LlamaIndex
☁️ Deployment & Infrastructure
- Docker, Kubernetes, GitHub Actions, Jenkins
- AWS (EC2, Lambda, S3, SageMaker), GCP, Azure
- Redis, PostgreSQL, MongoDB
📊 Monitoring & Experimentation
- MLflow, Weights & Biases, TensorBoard, Prometheus, Grafana

👨‍💼 Qualifications
🎓 Education
- Bachelor’s or Master’s degree in CS, AI, Statistics, or related quantitative disciplines.
- Certifications in advanced ML, data science, or AI are a plus.
🧑‍💻 Experience
- 4–8 years of hands-on experience in applied machine learning.
- Demonstrated success in deploying models to production at scale.
- Deep familiarity with transformer-based architectures and model evaluation.

✅ You’ll Excel In This Role If You…
- Thrive on solving end-to-end ML problems — not just notebooks, but deployment, testing, and iteration.
- Obsess over clean, maintainable, reusable code and pipelines.
- Think from first principles and challenge model assumptions when they don’t work.
- Are deeply curious and have built multiple projects just because you wanted to know how something works.
- Are comfortable working with ambiguity, fast timelines, and real-time data challenges.
- Want to build AI products that get used by real people and drive revenue outcomes — not just vanity demos.

💼 What You’ll Get at Darwix AI
- Work with some of the brightest minds in AI, product, and design.
- Solve AI problems that push the boundaries of real-time, voice-first, multilingual enterprise use cases.
- Direct mentorship from senior architects and AI scientists.
- Competitive compensation (₹30L–₹45L CTC) + ESOPs + rapid growth trajectory.
- Opportunity to shape the future of a global-first AI startup built from India.
- Hands-on experience with the most advanced tech stack in applied ML and production AI.
- Front-row seat to a generational company that is redefining enterprise AI.

📩 How to Apply
Ready to build with us? Send your resume, GitHub/portfolio, and a short write-up on: “What’s the most interesting ML system you’ve built — and what made it work?”
Email: people@darwix.ai
Subject: Senior ML Engineer – Application

🔐 Final Notes
We value speed, honesty, and humility. We ship fast, fail fast, and learn even faster. This role is designed for high-agency, hands-on ML engineers who want to make a difference — not just write code.
If you’re looking for a role where you own real impact, push technical boundaries, and work with a team that’s as obsessed with AI as you are — then Darwix AI is the place for you.
Darwix AI – GenAI for Revenue Teams. Built from India, for the World.
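A minimal sketch of the text-classification workflow referenced in the responsibilities above, using scikit-learn; the toy data, labels, and model choice are illustrative assumptions, not real conversation data.

```python
# Minimal text-classification sketch: TF-IDF features + logistic regression,
# with a precision/recall report of the kind used to judge trade-offs.
# The toy labels and data are illustrative, not real conversation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

texts = ["price is too high", "send me the contract", "call me next week",
         "not interested right now", "ready to sign today", "budget is tight"]
labels = ["objection", "positive", "follow_up",
          "objection", "positive", "objection"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

# Per-class precision/recall makes the operating trade-offs explicit.
print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```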
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary
We are seeking an Apache Hadoop - Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Spark-based data processing systems. This role involves hands-on experience in Spark architecture and core functionalities, focusing on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming. If you’re passionate about working in fast-paced, dynamic environments and want to be part of the cutting edge of data solutions, this role is for you.

We’re Looking For Someone Who Can
- Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads.
- Troubleshooting: Provide expert-level troubleshooting for any data or performance issues related to Spark jobs and clusters.
- Data Processing Expertise: Work extensively with large-scale data pipelines using Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming).
- Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of Spark jobs to reduce processing time and resource consumption (a minimal tuning-oriented PySpark sketch follows this listing).
- Cluster Management: Collaborate with DevOps and infrastructure teams to manage Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
- Real-time Data: Design and implement real-time data processing solutions using Apache Spark Streaming or Structured Streaming.
This role requires flexibility to work in rotational shifts, based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24x7 environment and willing to adjust working hours accordingly.

What Makes You The Right Fit For This Position
- Expert in Apache Spark: In-depth knowledge of Spark architecture, execution models, and components (Spark Core, Spark SQL, Spark Streaming, etc.).
- Data Engineering Practices: Solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs.
- Big Data Ecosystem: Knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN.
- Performance Tuning and Debugging: Demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks.
- Experience with Cloud Platforms: Hands-on experience running Spark clusters on cloud platforms such as AWS, Azure, or GCP.
- Containerization & Orchestration: Experience with containerized Spark environments using Docker and Kubernetes is a plus.

Good To Have
- Certification in Apache Spark or related big data technologies.
- Experience working with Acceldata's data observability platform or similar tools for monitoring Spark jobs.
- Demonstrated experience with scripting languages like Bash, PowerShell, and Python.
- Familiarity with concepts related to application, server, and network security management.
- Possession of certifications from leading Cloud providers (AWS, Azure, GCP), and expertise in Kubernetes would be significant advantages.
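To make the partitioning and shuffle tuning concrete, here is a minimal PySpark sketch; the file paths, column names, and partition counts are illustrative assumptions, not a recommended production configuration.

```python
# Minimal PySpark sketch: read, repartition to control shuffle width, cache a
# reused DataFrame, and aggregate. Paths, columns, and partition counts are
# illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("tuning-sketch")
         .config("spark.sql.shuffle.partitions", "64")  # tune to cluster size
         .getOrCreate())

events = spark.read.parquet("s3://bucket/events/")  # hypothetical path

# Repartition on the aggregation key to reduce skewed shuffles; cache because
# the DataFrame is reused downstream.
events = events.repartition(64, "user_id").cache()

daily = (events
         .groupBy("user_id", F.to_date("ts").alias("day"))
         .agg(F.count("*").alias("events"), F.sum("bytes").alias("bytes")))

daily.write.mode("overwrite").parquet("s3://bucket/daily_rollup/")
spark.stop()
```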
Posted 1 week ago
10.0 - 16.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Staff Network Engineer - Cloud Networking | BGP Routing | CI/CD Pipeline | 10-16 Years

About Skyhigh Security:
Skyhigh Security is a dynamic, fast-paced cloud company that is a leader in the security industry. Our mission is to protect the world’s data, and because of this, we live and breathe security. We value learning at our core, underpinned by openness and transparency. Since 2011, organizations have trusted us to provide them with a complete, market-leading security platform built on a modern cloud stack.
Our industry-leading suite of products radically simplifies data security through easy-to-use, cloud-based, Zero Trust solutions that are managed in a single dashboard, powered by hundreds of employees across the world. With offices in Santa Clara, Aylesbury, Paderborn, Bengaluru, Sydney, Tokyo and more, our employees are the heart and soul of our company.
Skyhigh Security is more than a company; here, when you invest your career with us, we commit to investing in you. We embrace a hybrid work model, creating the flexibility and freedom you need from your work environment to reach your potential. From our employee recognition program, to our ‘Blast Talks’ learning series, and team celebrations (we love to have fun!), we strive to be an interactive and engaging place where you can be your authentic self. We are on these too! Follow us on LinkedIn and Twitter @SkyhighSecurity.

Role Overview:
We are seeking a Staff Network Engineer with deep expertise in managing global network infrastructure, including internet exchange peering, IP transit circuits, and secure web gateway PoPs. You will play a pivotal role in ensuring the performance, security, and scalability of our global hybrid cloud network spanning OCI, AWS, and on-prem data centers. This is a highly technical, hands-on role requiring experience dealing with Tier 1 carriers, private peering, and highly automated DevOps practices.

About the Role:
We are seeking a highly skilled and experienced professional to lead the determination and deployment of robust, stable, and manageable computing and network services. In this critical role, you will be responsible for managing technical projects and determining the overall site computing and network server strategy. You will actively research new technologies, products, and tools, continuously seeking innovative solutions to enhance our infrastructure. A core responsibility will involve monitoring and performing capacity/feasibility studies, alongside resolving network capacity issues to ensure seamless operations. You will also identify, develop, and deploy tools that directly support our network server services, and conduct impact analyses of new technologies on our existing environment to ensure smooth integration.
You will be expected to analyze and troubleshoot complex and advanced problems, demonstrating strong diagnostic and problem-solving abilities. Furthermore, you will be tasked with planning and scheduling work to meet established deadlines, ensuring the efficient completion of interrelated tasks. Finally, you will leverage your judgment in data analysis to develop and design solutions for moderately complex processes, contributing to the continuous improvement and evolution of our systems.
- Design, implement, and maintain internet exchange (IX) and direct peering relationships with global providers and partners.
- Manage and optimize secure web gateways and IP transit circuits to ensure low-latency and high-resiliency WAN connectivity.
- Collaborate with Tier 1 IP transit providers and IXs to provision, monitor, and troubleshoot high-capacity circuits.
- Maintain and optimize dynamic and static routing policies using BGP and open-source tools like BIRD (a minimal session-monitoring sketch follows this listing).
- Lead troubleshooting efforts for WAN, routing, and switching issues across on-prem and cloud environments.
- Own the deployment and maintenance of hybrid network infrastructure across Oracle Cloud Infrastructure (OCI), Amazon Web Services (AWS), and on-prem data centers.
- Collaborate with DevOps team members to integrate network automation and observability into CI/CD pipelines using Git, ArgoCD, and Terraform.
- Perform blacklist/abuse removals and maintain network reputation and hygiene.
- Monitor network performance and user experience using tools like Kentik, SNMP, NetFlow/sFlow, and custom dashboards.
- Implement and maintain telemetry, logging, and alerting to support proactive infrastructure monitoring.

About You:
- 10-16 years of experience in network engineering, with a strong focus on Internet connectivity, BGP routing, and WAN troubleshooting.
- Cloud networking expertise across OCI, AWS, and Azure, including hybrid network architectures.
- Experience with Kubernetes-based environments, network automation, and CI/CD pipelines.
- Proven experience in direct internet peering, IP transit negotiations, and IX connectivity.
- Deep understanding of routing protocols (BGP, OSPF), MPLS, VRFs, Virtual Chassis, and switching technologies.
- Experience with ZTP on Juniper switches.
- Expertise in working with secure web gateways, DNS filtering, and network firewalls.
- Experience with open-source networking tools such as BIRD, FRR, or similar.
- Proficient in scripting (Python, Bash, or equivalent) for network automation.
- Experience managing user experience monitoring, flow monitoring (e.g., Kentik), and telemetry.
- Strong documentation, communication, and collaboration skills.

Nice to Have:
- Network certifications (e.g., CCNP, JNCIP, AWS Advanced Networking).
- Experience with infrastructure-as-code (Terraform, Ansible).
- Familiarity with blackhole routing, DDoS mitigation strategies, and incident response.

Company Benefits and Perks:
We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees.
- Retirement Plans
- Medical, Dental and Vision Coverage
- Paid Time Off
- Paid Parental Leave
- Support for Community Involvement
We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
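As a small illustration of the Python-based network automation this role calls for, here is a sketch that flags non-established BGP sessions by parsing `birdc show protocols` output. The column layout is an assumption based on typical BIRD 2.x formatting; a production check would verify the format against the deployed version or use a structured exporter instead.

```python
# Sketch: flag BIRD BGP sessions that are not Established by parsing
# `birdc show protocols` output. The expected column layout is an assumption
# (typical of BIRD 2.x); verify against your deployment before relying on it.
import subprocess

def bgp_sessions() -> dict[str, str]:
    out = subprocess.run(["birdc", "show", "protocols"],
                         capture_output=True, text=True, check=True).stdout
    sessions = {}
    for line in out.splitlines():
        parts = line.split()
        # Assumed layout: name, proto, table, state, since, info...
        if len(parts) >= 6 and parts[1] == "BGP":
            sessions[parts[0]] = parts[5]  # e.g. "Established" or "Active"
    return sessions

if __name__ == "__main__":
    for name, state in bgp_sessions().items():
        if state != "Established":
            print(f"ALERT: BGP session {name} is {state}")  # hook into alerting
```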
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 Job Title: Engineering Lead
Company: Darwix AI
Location: Gurgaon (On-site)
Type: Full-Time
Experience: 5–10 Years
Compensation: Competitive + Performance-based incentives + Meaningful ESOPs

🧠 About Darwix AI
Darwix AI is one of India’s fastest-growing AI startups, building the future of enterprise revenue intelligence. We offer a GenAI-powered conversational intelligence and real-time agent assist suite that transforms how large sales teams interact, close deals, and scale operations.
We’re already live with enterprise clients across India, the UAE, and Southeast Asia, and our platform enables multilingual speech-to-text, AI-driven nudges, and contextual conversation coaching—backed by our proprietary LLMs and cutting-edge voice infrastructure.
With backing from top-tier VCs and over 30 angel investors, we’re now hiring an Engineering Lead who can architect, own, and scale the core engineering stack as we prepare for 10x growth.

🌟 Role Overview
As the Engineering Lead at Darwix AI, you’ll take ownership of our platform architecture, product delivery, and engineering quality across the board. You’ll work closely with the founders, product managers, and the AI team to convert fast-moving product ideas into scalable features.
You will:
- Lead backend and full-stack engineers across microservices, APIs, and real-time pipelines
- Architect scalable systems for AI/LLM deployments
- Drive code quality, maintainability, and engineering velocity
This is a hands-on, player-coach role—perfect for someone who loves building but is also excited about mentoring and growing a technical team.

🎯 Key Responsibilities
🛠️ Technical Leadership
- Own technical architecture across backend, frontend, and DevOps stacks
- Translate product roadmaps into high-performance, production-ready systems
- Drive high-quality code reviews, testing practices, and performance optimization
- Make critical system-level decisions around scalability, security, and reliability
🚀 Feature Delivery
- Work with the product and AI teams to build new features around speech recognition, diarization, real-time coaching, and analytics dashboards
- Build and maintain backend services for data ingestion, processing, and retrieval from vector DBs, MySQL, and MongoDB
- Create clean, reusable APIs (REST & WebSocket) that power our web-based agent dashboards
🧱 System Architecture
- Refactor monoliths into a microservice-based architecture
- Optimize real-time data pipelines with Redis, Kafka, and async queues (a minimal cache-aside sketch follows this listing)
- Implement serverless modules using AWS Lambda, Docker containers, and CI/CD pipelines
🧑‍🏫 Mentorship & Team Building
- Lead a growing team of engineers—guide on architecture, code design, and performance tuning
- Foster a culture of ownership, documentation, and continuous learning
- Mentor junior developers, review PRs, and set up internal coding best practices
🔄 Collaboration
- Act as the key technical liaison between Product, Design, AI/ML, and DevOps teams
- Work directly with founders on roadmap planning, delivery tracking, and go-live readiness
- Contribute actively to investor tech discussions, client onboarding, and stakeholder calls

⚙️ Our Tech Stack
- Languages: Python (FastAPI, Django), PHP (legacy support), JavaScript, TypeScript
- Frontend: HTML, CSS, Bootstrap, Mustache templates (React.js/Next.js optional)
- AI/ML Integration: LangChain, Whisper, RAG pipelines, Transformers, Deepgram, OpenAI APIs
- Databases: MySQL, PostgreSQL, MongoDB, Redis, Pinecone/FAISS (vector DBs)
- Cloud & Infra: AWS EC2, S3, Lambda, CloudWatch, Docker, GitHub Actions, Nginx
- DevOps: Git, Docker, CI/CD pipelines, Jenkins/GitHub Actions, load testing
- Tools: Jira, Notion, Slack, Postman, Swagger

🧑‍💼 Who You Are
- 5–10 years of professional experience in backend/full-stack development
- Proven experience leading engineering projects or mentoring junior devs
- Comfortable working in high-growth B2B SaaS startups or product-first orgs
- Deep expertise in one or more backend frameworks (Django, FastAPI, Laravel, Flask)
- Experience working with AI products or integrating APIs from OpenAI, Deepgram, or Hugging Face is a huge plus
- Strong understanding of system design, DB normalization, caching strategies, and latency optimization
- Bonus: exposure to working with voice pipelines (STT/ASR), NLP models, or real-time analytics

📌 Qualities We’re Looking For
- Builder-first mindset – you love launching features fast and scaling them well
- Execution speed – you move with urgency but don’t break things
- Hands-on leadership – you guide people by writing code, not just processes
- Problem-solver – when things break, you own the fix and the root cause
- Startup hunger – you thrive on chaos, ambiguity, and shipping weekly

🎁 What We Offer
- High Ownership: Directly shape the product and its architecture from the ground up
- Startup Velocity: Ship fast, learn fast, and push boundaries
- Founding Engineer Exposure: Work alongside IIT-IIM-BITS founders with full transparency
- Compensation: Competitive salary + meaningful equity + performance-based incentives
- Career Growth: Move into an EM/CTO-level role as the org scales
- Tech Leadership: Own features end-to-end—from spec to deployment

🧠 Final Note
This is not just another engineering role. This is your chance to:
- Own the entire backend for a GenAI product serving global enterprise clients
- Lead technical decisions that define our future infrastructure
- Join the leadership team at a startup that’s shipping faster than anyone else in the category
If you're ready to build a product with 10x potential, join a high-output team, and be the reason why the tech doesn’t break at scale, this role is for you.

📩 How to Apply
Send your resume to people@darwix.ai with the subject line: “Application – Engineering Lead – [Your Name]”
Attach:
- Your latest CV or LinkedIn profile
- GitHub/portfolio link (if available)
- A short note (3–5 lines) on why you're excited about Darwix AI and this role
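To make the caching-strategy discussion above concrete, here is a minimal cache-aside sketch with redis-py; the key format, TTL, and backing lookup are illustrative assumptions.

```python
# Cache-aside sketch with redis-py: check Redis first, fall back to the
# source of truth, then populate the cache with a TTL. Key format, TTL,
# and fetch_profile_from_db are hypothetical placeholders.
import json
import redis

r = redis.Redis(decode_responses=True)
TTL_SECONDS = 300  # assumed freshness window

def fetch_profile_from_db(user_id: int) -> dict:
    # Stand-in for a real MySQL/MongoDB lookup.
    return {"id": user_id, "name": "Asha", "segment": "enterprise"}

def get_profile(user_id: int) -> dict:
    key = f"profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)             # cache hit
    profile = fetch_profile_from_db(user_id)  # cache miss: hit the DB
    r.setex(key, TTL_SECONDS, json.dumps(profile))
    return profile

print(get_profile(7))  # first call warms the cache; repeats are served by Redis
```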
Posted 1 week ago
1.0 years
0 Lacs
Greater Nashik Area
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger.
We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics.
Do You Dream Big? We Need You.

Job Description
Job Title: Junior Data Scientist
Location: Bangalore
Reporting to: Senior Manager – Analytics

Purpose of the role
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.
In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of:
- LLM-based frameworks, tools, and technologies
- Cloud-native technologies and solutions
- Microservices-based software architecture and design patterns
As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.
Key tasks & accountabilities
- Large Language Models (LLM): Experience with LangChain and LangGraph; proficiency in building agentic patterns like ReAct, ReWoo, and LLMCompiler.
- Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video); designing and optimizing chunking strategies and clustering for large data processing.
- Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines; low-latency inference and deployment architectures.
- NL2SQL: Natural language-driven SQL generation for databases; experience with natural language interfaces to databases and query optimization (a minimal NL2SQL prompt sketch follows this listing).
- API Development: Building scalable APIs with FastAPI for AI model serving.
- Containerization & Orchestration: Proficient with Docker for containerized AI services; experience with orchestration tools for deploying and managing services.
- Data Processing & Pipelines: Experience with chunking strategies for efficient document processing; building data pipelines to handle large-scale data for AI model training and inference.
- AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow and PyTorch; proficiency in LangChain, LangGraph, and other LLM-related technologies.
- Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting; experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy; strong understanding of context window management and optimizing prompts for performance and efficiency.

Qualifications, Experience, Skills
Level of educational attainment required (1 or more of the following):
- Bachelor's or master's degree in Computer Science, Engineering, or a related field.
Previous work experience required:
- Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.
Technical skills required:
- Solid understanding of language model technologies, including LangChain, the OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
And above all of this, an undying love for beer!
We dream big to create a future with more cheer.
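To illustrate the NL2SQL capability listed above, here is a minimal prompt-based sketch using the OpenAI Python SDK; the schema, model name, and prompt wording are illustrative assumptions, and generated SQL should always be validated before execution.

```python
# Minimal NL2SQL sketch with the OpenAI Python SDK: prompt the model with the
# table schema and a question, get back a single SQL statement. Schema, model
# name, and prompt are assumptions; never execute generated SQL unvalidated.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = """CREATE TABLE sales (
  id INT, region TEXT, brand TEXT, volume_hl REAL, sold_on DATE
);"""  # hypothetical table

def nl2sql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "Translate the question into one SQL query for this "
                        f"schema. Return only SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

print(nl2sql("Total volume by region in 2024, highest first"))
```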
Posted 1 week ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position Overview
We are hiring a Lead AI Data Scientist to join our client's Ahmedabad-based team. Collaborating closely with our leadership, you'll develop innovative applications and structured processes that enhance productivity and expand market reach.

Key Responsibilities
- Lead the development of sophisticated machine learning and deep learning models that solve complex business problems (a minimal PyTorch training sketch follows this listing).
- Manage timelines, scope, and deliverables for AI/data science projects, ensuring projects are completed on time and meet quality standards.
- Design and implement machine learning models and AI applications tailored for internal and customer-focused solutions.
- Perform XML regression analysis and build robust C++ pipelines for data processing.
- Continuously optimize models for performance, accuracy, and scalability, considering real-world constraints such as latency and interpretability.
- Develop tools to optimize personal and company-wide productivity.
- Collaborate with cross-functional teams to identify automation opportunities.
- Oversee the data pipeline and infrastructure, ensuring data quality, consistency, and accessibility for model development.
- Analyze complex data sets to provide actionable insights.
- Participate in strategic brainstorming sessions to align AI initiatives with the company’s vision.

Qualifications
- Bachelor’s or master’s degree in Data Science, Computer Science, or a related field.
- Expertise in XML regression analysis and C++ programming.
- Familiarity with machine learning frameworks (e.g., TensorFlow, PyTorch).
- Strong problem-solving and analytical skills.
- Passion for AI and belief in the transformative power of systems and processes.
- Proactive attitude and excellent communication skills.

Benefits
- Opportunity to work on groundbreaking AI technologies.
- Collaborative, innovative, and supportive work environment.
- Competitive salary and benefits package.
- Career growth opportunities in a fast-growing company.

Skills: data science, C++ programming, data analysis, data pipeline management, XML regression analysis, PyTorch, machine learning, TensorFlow, deep learning, C++
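A minimal sketch of the kind of supervised model training this role describes, using PyTorch on synthetic data; the architecture and hyperparameters are illustrative assumptions.

```python
# Minimal PyTorch regression sketch: a small MLP fit to synthetic data.
# Architecture, learning rate, and epoch count are illustrative placeholders.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 4)  # synthetic features
y = X @ torch.tensor([1.5, -2.0, 0.5, 3.0]) + 0.1 * torch.randn(256)

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(-1), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.4f}")  # should approach the noise floor
```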
Posted 1 week ago
15.0 years
0 Lacs
Gurgaon
On-site
Project Role: Tech Delivery Subject Matter Expert
Project Role Description: Drive innovative practices into delivery, bring depth of expertise to a delivery engagement. Sought out as experts, enhance Accenture's marketplace reputation. Bring emerging ideas to life by shaping Accenture and client strategy. Use deep technical expertise, business acumen, and fluid communication skills; work directly with a client in a trusted advisor relationship to gather requirements to analyze, design, and/or implement technology best practice business changes.
Must have skills: 5G Wireless Networks & Technologies
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Job Title: 5G Core Network Ops Specialist

Summary:
We are seeking a skilled 5G Core Network Specialist to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.

Roles and Responsibilities:
1. Fault Handling & Troubleshooting:
• Provide Level 2 (L2) support for 5G Core SA network functions in a production environment.
• Analyze alarms from NetAct/Mantaray or external monitoring tools.
• Correlate events using Netscout, Mantaray, and PM/CM data.
• Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handover.
• Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/SDL/PCF/CHF/Flowone, Nokia EDR restarts, crashes, overload).
• Handle troubleshooting on the 5G Core database, UDM, UDR, SDL, provisioning, Flowone, CHF (Charging), and PCF (Policy).
• Perform packet tracing (Wireshark) or core traces (PCAP, logs) and Nokia PCMD trace capturing and analysis (a minimal PCAP-triage sketch follows this listing).
• Perform root cause analysis (RCA) and implement corrective actions.
• Handle escalations from Tier-1 support and provide timely resolution.
2. Session & Service Investigation:
• Trace subscriber issues (5G attach, PDU session, QoS).
• Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM).
• Correlate user-plane drops, abnormal releases, and bearer QoS mismatches.
• Work on preventive measures with the L1 team for health checks & backups.
3. Configuration and Change Management:
• Create an MOP for required changes; validate the MOP with Ops teams and stakeholders before rollout/implementation.
• Maintain detailed documentation of network configurations, incident reports, and operational procedures.
• Support software upgrades, patch management, and configuration changes.
• Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs).
• Audit NRF/PCF/UDM etc. configuration & database.
• Validate policy rules, slicing parameters, and DNN/APN settings.
• Support integration of new 5G Core nodes and features into the live network.
4. Performance Monitoring:
• Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs, e.g., registration success rate, PDU session setup success, latency, throughput, and user-plane utilization.
• Proactively detect degrading KPI trends.
5. Security & Access Support:
• Application support for Nokia EDR and CrowdStrike.
• Assist with certificate renewal, firewall/NAT issues, and access failures.
6. Escalation & Coordination:
• Escalate unresolved issues to L3 teams, TAC, and OSS/Core engineering.
• Work with L3 and care teams for issue resolution.
• Ensure compliance with SLAs and contribute to continuous service improvement.
7. Reporting:
• Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance.

Technical Experience and Professional Attributes:
• 8–12 years of experience in the telecom industry with hands-on experience in 5G Core.
• Mandatory experience with the Nokia 5G Core SA platform.
• Solid understanding of 5G packet core network protocols and interfaces such as N1, N2, N3, N6, N7, N8, GTP-C/U, and HTTPS, including the ability to trace and debug issues.
• Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, UDR, SDL, Nokia EDR, provisioning, and Flowone.
• Troubleshooting and configuration hands-on experience with the 5G Core database, UDM, UDR, SDL, provisioning, Flowone, CHF (Charging), and PCF (Policy).
• In-depth understanding of 3GPP call flows for 5G SA and 5G NSA; call routing, number analysis, system configuration, call flow, data roaming, and configuration; knowledge of telecom standards, e.g., 3GPP, ITU-T, and ANSI.
• Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based).
• Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces.
• Strong analytical and troubleshooting skills.
• Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing) and log management systems (e.g., Prometheus, Grafana).
• Knowledge of network protocols and security (TLS, IPsec).
• Excellent communication and documentation skills.

Educational Qualification:
• BE / BTech
• 15 years full-time education

Additional Information:
• Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes).
• Experience with the Nokia 5G Core platform, NCOM, NCS, Nokia private cloud and public cloud (AWS preferred), and cloud-native environments (Kubernetes, Docker, CI/CD pipelines).
• Cloud certifications (AWS) / experience on AWS Cloud.
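As a small illustration of scripted PCAP triage alongside Wireshark, here is a sketch using scapy to summarize a capture. The file name is a placeholder, and deeper 5G-specific dissection (GTP, NGAP, HTTP/2 SBI) would need protocol-specific layers.

```python
# Sketch: quick PCAP triage with scapy. Count packets per transport protocol
# and list the busiest source IPs. The capture path is a placeholder; 5G-
# specific layers (GTP, NGAP, HTTP/2 SBI) need dedicated dissectors.
from collections import Counter
from scapy.all import rdpcap, IP, TCP, UDP

packets = rdpcap("core_trace.pcap")  # hypothetical capture file

proto_counts, talkers = Counter(), Counter()
for pkt in packets:
    if IP in pkt:
        talkers[pkt[IP].src] += 1
        if TCP in pkt:
            proto_counts["TCP"] += 1
        elif UDP in pkt:
            proto_counts["UDP"] += 1
        else:
            proto_counts["other-IP"] += 1

print("per-protocol:", dict(proto_counts))
print("top talkers:", talkers.most_common(5))
```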
Posted 1 week ago
15.0 years
0 Lacs
Gurgaon
On-site
Project Role: Engineering Services Practitioner
Project Role Description: Assist with end-to-end engineering services to develop technical engineering solutions to solve problems and achieve business objectives. Solve engineering problems and achieve business objectives using scientific, socio-economic, and technical knowledge and practical experience. Work across structural and stress design, qualification, configuration, and technical management.
Must have skills: 5G Wireless Networks & Technologies
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Job Title: 5G Core Network Ops Senior Engineer

Summary:
We are seeking a skilled 5G Core Network Senior Engineer to join our team. The ideal candidate will have extensive experience with Nokia 5G Core platforms and will be responsible for fault handling, troubleshooting, session and service investigation, configuration review, performance monitoring, security support, change management, and escalation coordination.

Roles and Responsibilities:
1. Fault Handling & Troubleshooting:
• Provide Level 2 (L2) support for 5G Core SA network functions in a production environment.
• Nokia EDR operations & support: monitor and maintain the health of Nokia EDR systems.
• Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery.
• Ensure EDRs are correctly generated for all relevant 5G Core functions (AMF, SMF, UPF, etc.) and interfaces (N4, N6, N11, etc.).
• Validate EDR formats and schemas against 3GPP and Nokia specifications.
• NCOM platform operations: operate and maintain the Nokia Cloud Operations Manager (NCOM) platform.
• Manage lifecycle operations of CNFs, VNFs, and network services (NSs) across distributed Kubernetes and OpenStack environments.
• Analyze alarms from NetAct/Mantaray or external monitoring tools.
• Correlate events using Netscout, Mantaray, and PM/CM data.
• Troubleshoot and resolve complex issues related to registration, session management, mobility, policy, charging, DNS, IPSec, and handover.
• Handle node-level failures (AMF/SMF/UPF/NRF/UDM/UDR/PCF/CHF restarts, crashes, overload).
• Perform packet tracing (Wireshark) or core traces (PCAP, logs) and Nokia PCMD trace capturing and analysis.
• Perform root cause analysis (RCA) and implement corrective actions.
• Handle escalations from Tier-1 support and provide timely resolution.
2. Automation & Orchestration:
• Automate deployment, scaling, healing, and termination of network functions using NCOM.
• Develop and maintain Ansible playbooks, Helm charts, and GitOps pipelines (FluxCD, ArgoCD).
• Integrate NCOM with third-party systems using open APIs and custom plugins.
3. Session & Service Investigation:
• Trace subscriber issues (5G attach, PDU session, QoS).
• Use tools like EDR, Flow Tracer, and Nokia Cloud Operations Manager (COM).
• Correlate user-plane drops, abnormal releases, and bearer QoS mismatches.
• Work on preventive measures with the L1 team for health checks & backups.
4. Configuration and Change Management:
• Create an MOP for required changes; validate the MOP with Ops teams and stakeholders before rollout/implementation.
• Maintain detailed documentation of network configurations, incident reports, and operational procedures.
• Support software upgrades, patch management, and configuration changes.
• Maintain documentation for known issues, troubleshooting guides, and standard operating procedures (SOPs).
• Audit NRF/PCF/UDM etc. configuration & database.
• Validate policy rules, slicing parameters, and DNN/APN settings. • Support integration of new 5G Core nodes and features into the live network. 5. Performance Monitoring: • Use KPI dashboards (NetAct/NetScout) to monitor 5G Core KPIs e.g registration success rate, PDU session setup success, latency, throughput, user-plane utilization. • Proactively detect degrading KPIs trends. 6. Security & Access Support: • Application support for Nokia EDR and CrowdStrike. • Assist with certificate renewal, firewall/NAT issues, and access failures. 7. Escalation & Coordination: • Escalate unresolved issues to L3 teams, Nokia TAC, OSS/Core engineering. • Work with L3 and care team for issue resolution. • Ensure compliance with SLAs and contribute to continuous service improvement. 8. Reporting • Generate daily/weekly/monthly reports on network performance, incident trends, and SLA compliance. Technical Experience and Professional Attributes: • 5–9 years of experience in Telecom industry with hands on experience. • Mandatory experience with Nokia 5G Core-SA platform. • Handson Experience on Nokia EDR Operations & Support, Monitor and maintain the health of Nokia EDR systems. • Perform log analysis and troubleshoot issues related to EDR generation, parsing, and delivery. • Experience on NCOM Platform Operations Operate and maintain the Nokia Cloud Operations Manager (NCOM) platform • NF deployment and troubleshooting experience on deployment, scaling, healing, and termination of network functions using NCOM. • Solid understanding for 5G Core Packet Core Network Protocol such as N1, N2, N3, N6, N7, N8, 5G Core interfaces, GTP-C/U, HTTPS and including ability to trace, debug the issues. • Hands-on experience with 5GC components: AMF, SMF, UPF, NRF, AUSF, NSSF, UDM, PCF, CHF, SDL, NEDR, Provisioning and Flowone. • In-depth understanding of 3GPP call flows for 5G-SA, 5G NSA, Call routing, number analysis, system configuration, call flow, Data roaming, configuration and knowledge of Telecom standards e.g. 3GPP, ITU-T and ANSI. • Familiarity with policy control mechanisms, QoS enforcement, and charging models (event-based, session-based). • Hands-on experience with Diameter, HTTP/2, REST APIs, and SBI interfaces. • Strong analytical and troubleshooting skills. • Proficiency in monitoring and tracing tools (NetAct, NetScout, PCMD tracing). And log management systems (e.g., Prometheus, Grafana). • Knowledge of network protocols and security (TLS, IPsec). • Excellent communication and documentation skills. Educational Qualification: • BE / BTech • 15 Years Full Time Education Additional Information: • Nokia certifications (e.g., NCOM, NCS, NSP, Kubernetes). • Experience in Nokia Platform 5G Core, NCOM, NCS, Nokia Private cloud and Public Cloud (AWS preferred), cloud-native environments (Kubernetes, Docker, CI/CD pipelines). • Cloud Certifications (AWS)/ Experience on AWS Cloud 15 years full time education
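As a hedged illustration of the KPI-monitoring duty above, here is a minimal Python sketch that derives registration and PDU-session success rates from raw counters and flags degradation. The counter names and the 99% threshold are hypothetical assumptions; in practice the values would come from NetAct/NetScout exports.

# Minimal sketch: derive 5G Core KPIs from raw counters.
# Counter names and the 99% threshold are hypothetical assumptions.

def success_rate(attempts: int, successes: int) -> float:
    """Return a percentage success rate, guarding against zero attempts."""
    return 100.0 * successes / attempts if attempts else 0.0

counters = {
    "amf_registration_attempts": 120_450,
    "amf_registration_successes": 119_880,
    "smf_pdu_session_attempts": 98_310,
    "smf_pdu_session_successes": 97_020,
}

reg_sr = success_rate(counters["amf_registration_attempts"],
                      counters["amf_registration_successes"])
pdu_sr = success_rate(counters["smf_pdu_session_attempts"],
                      counters["smf_pdu_session_successes"])

# Flag degrading KPIs against an example 99% threshold.
for name, value in {"Registration SR": reg_sr, "PDU Session SR": pdu_sr}.items():
    status = "OK" if value >= 99.0 else "DEGRADED"
    print(f"{name}: {value:.2f}% [{status}]")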
Posted 1 week ago
5.0 years
19 - 20 Lacs
Chennai, Tamil Nadu, India
On-site
Position Title: Senior Software Engineer 34332
Location: Chennai (Onsite)
Job Type: Contract
Budget: ₹20 LPA
Notice Period: Immediate Joiners Only

Role Overview
We are looking for a highly skilled Senior Software Engineer to be part of a centralized observability and monitoring platform team. The role focuses on building and maintaining a scalable, reliable observability solution that enables faster incident response and data-driven decision-making through latency, traffic, error, and saturation monitoring. This opportunity requires a strong background in cloud-native architecture, observability tooling, backend and frontend development, and data pipeline engineering.

Key Responsibilities
Design, build, and maintain observability and monitoring platforms to enhance MTTR/MTTX
Create and optimize dashboards, alerts, and monitoring configurations using tools like Prometheus and Grafana
Architect and implement scalable data pipelines and microservices for real-time and batch data processing
Utilize GCP tools including BigQuery, Dataflow, Dataproc, Data Fusion, and others
Develop end-to-end solutions using Spring Boot, Python, Angular, and REST APIs
Design and manage relational and NoSQL databases including PostgreSQL, MySQL, and BigQuery
Implement best practices in data governance, RBAC, encryption, and security within cloud environments
Ensure automation and reliability through CI/CD, Terraform, and orchestration tools like Airflow and Tekton
Drive full-cycle SDLC processes including design, coding, testing, deployment, and monitoring
Collaborate closely with software architects, DevOps, and cross-functional teams for solution delivery

Core Skills Required
Proficiency in Spring Boot, Angular, Java, and Python
Experience in developing microservices and SOA-based systems
Cloud-native development experience, preferably on Google Cloud Platform (GCP)
Strong understanding of HTML, CSS, JavaScript/TypeScript, and modern frontend frameworks
Experience with infrastructure automation and monitoring tools
Working knowledge of data engineering technologies: PySpark, Airflow, Apache Beam, Kafka, and similar
Strong grasp of RESTful APIs, GitHub, and TDD methodologies

Preferred Skills
GCP Professional Certifications (e.g., Data Engineer, Cloud Developer)
Hands-on experience with Terraform, Cloud SQL, data governance tools, and security frameworks
Exposure to performance tuning, cost optimization, and observability best practices

Experience Required
5+ years of experience in full-stack and cloud-based application development
Strong track record in building distributed, scalable systems
Prior experience with observability and performance monitoring tools is a plus

Educational Qualifications
Bachelor's Degree in Computer Science, Information Technology, or a related field (mandatory)

Skills: Java, Data Fusion, HTML, Dataflow, Terraform, Spring Boot, RESTful APIs, Python, Angular, Dataproc, microservices, Apache Beam, CSS, Cloud SQL, SOA, TypeScript, TDD, Kafka, JavaScript, Airflow, GitHub, PySpark, BigQuery, GCP
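To make the latency/traffic/error/saturation monitoring theme concrete, here is a minimal, hedged Python sketch using the prometheus_client library to expose a request-latency histogram that Prometheus can scrape and Grafana can chart. The metric name, endpoint label, and bucket boundaries are illustrative assumptions, not a prescription.

# Minimal sketch: expose a request-latency histogram for Prometheus to scrape.
# Metric name, labels, and buckets are illustrative assumptions.
import random
import time

from prometheus_client import Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "http_request_duration_seconds",
    "Latency of handled requests",
    ["endpoint"],
    buckets=(0.05, 0.1, 0.25, 0.5, 1.0, 2.5),
)

def handle_request(endpoint: str) -> None:
    # Time the handler; the context manager records into the histogram.
    with REQUEST_LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.3))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        handle_request("/checkout")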
Posted 1 week ago
1.0 - 4.0 years
1 - 7 Lacs
India
On-site
Position Vacant: Front-end Developers
Organization Name: Gritsys Technologies Private Limited
Company Profile: Gritsys is a new-generation information technology services and solutions firm driven to create business value for our customers through the innovative integration of advanced technologies. Clients worldwide benefit from our agile approach toward implementing and sustaining global technology solutions. The company was founded by a team of enthusiastic IT and business specialists who wanted to overcome the routine and create a company that would act in the market not only for technology success but for business value creation and value addition. Thus, the mission of the company was defined: to contribute to the forward-looking transformation of society through software development.
Job Description/Responsibilities:
- Design and develop high-volume, low-latency applications for mission-critical systems, delivering high availability and performance.
- Contribute to all phases of the development lifecycle.
- Write well-designed, testable, efficient code.
- Ensure designs are in compliance with specifications.
- Prepare and produce releases of software components.
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review.
Desired Profile of the Candidate (Required Skills):
- Strong knowledge of Angular 14 and above (Angular is a must)
- Hands-on experience in TypeScript
- Ionic and Java are an added advantage
- Bootstrap 4 and above
- Proficient in JavaScript frameworks and CSS
- Ability to work both independently and in a team-oriented environment (Agile team experience a plus)
Experience: 1 to 4 years
Compensation Offered: Based on experience and skills
Location/s: Chennai
Level of Job: Software Developer
UG Qualification: Any
PG Qualification: Any
Industry Type: Software product development and business consulting
Contact: info@Gritsys.com
Job Type: Full-time
Pay: ₹15,000.00 - ₹60,000.00 per month
Work Location: In person
Posted 1 week ago
6.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a Senior Developer with expertise in SnapLogic and Apache Airflow to design, develop, and maintain enterprise-level data integration solutions. This role requires strong technical expertise in ETL development, workflow orchestration, and cloud technologies. You will be responsible for automating data workflows, optimizing performance, and ensuring the reliability and scalability of our data systems.

Key Responsibilities:
Design, develop, and manage ETL pipelines using SnapLogic, ensuring efficient data transformation and integration across various systems and applications.
Leverage Apache Airflow for workflow automation, job scheduling, and task dependencies, ensuring optimized execution and monitoring.
Work closely with cross-functional teams such as Data Engineering, DevOps, and Data Science to understand data requirements and deliver solutions.
Collaborate in designing and implementing data pipeline architectures to support large-scale data processing in cloud environments like AWS, Azure, and GCP.
Develop reusable SnapLogic pipelines and integrate with third-party applications and data sources, including databases, APIs, and cloud services.
Optimize SnapLogic pipeline performance to handle large volumes of data with minimal latency.
Provide guidance and mentoring to junior developers on the team, conducting code reviews and offering best-practice recommendations.
Troubleshoot and resolve pipeline failures, ensuring high data quality and minimal downtime.
Implement automated testing, continuous integration (CI), and continuous delivery (CD) practices for data pipelines.
Stay current with new SnapLogic features, Airflow upgrades, and industry best practices.

Required Skills & Experience:
6+ years of hands-on experience in data engineering, focusing on SnapLogic and Apache Airflow.
Strong experience with SnapLogic Designer and the SnapLogic cloud environment for building data integrations and ETL pipelines.
Proficiency in Apache Airflow for orchestrating, automating, and scheduling data workflows.
Strong understanding of ETL concepts, data integration, and data transformations.
Experience with cloud platforms like AWS, Azure, or Google Cloud and data storage systems such as S3, Azure Blob, and Google Cloud Storage.
Strong SQL skills and experience with relational databases like PostgreSQL, MySQL, and Oracle, as well as NoSQL databases.
Experience working with REST APIs, integrating data from third-party services, and using connectors.
Knowledge of data quality, monitoring, and logging tools for production pipelines.
Experience with CI/CD pipelines and tools such as Jenkins, GitLab, or similar.
Excellent problem-solving skills with the ability to diagnose issues and implement effective solutions.
Ability to work in an Agile development environment.
Strong communication and collaboration skills to work with both technical and non-technical teams.
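As a hedged sketch of the Airflow-orchestrating-SnapLogic pattern described above: an Airflow DAG that fires a SnapLogic triggered pipeline through an HTTP endpoint and fails loudly on any non-2xx response. The trigger URL and token are hypothetical placeholders, not real SnapLogic endpoints.

# Hedged sketch: an Airflow DAG that triggers a SnapLogic pipeline via an
# HTTP trigger endpoint. The URL and token below are hypothetical placeholders.
from datetime import datetime

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

SNAPLOGIC_TRIGGER_URL = "https://example.snaplogic.io/api/1/rest/slsched/feed/org/proj/load_orders"  # placeholder
SNAPLOGIC_TOKEN = "REPLACE_ME"  # placeholder; use a secrets backend in practice

def trigger_snaplogic_pipeline() -> None:
    """Fire the SnapLogic triggered task and raise on any HTTP failure."""
    resp = requests.post(
        SNAPLOGIC_TRIGGER_URL,
        headers={"Authorization": f"Bearer {SNAPLOGIC_TOKEN}"},
        timeout=300,
    )
    resp.raise_for_status()  # lets Airflow mark the task failed and retry

with DAG(
    dag_id="snaplogic_orders_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="trigger_snaplogic",
        python_callable=trigger_snaplogic_pipeline,
    )

Raising on HTTP failure, rather than swallowing it, is what lets Airflow's retry and alerting machinery do its job.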
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 7+ years
Location: Hyderabad (preferred), Pune, Mumbai

JD: We are seeking a skilled Snowflake Developer with 7+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities
1. Snowflake Development & Optimization
Design and develop Snowflake databases, schemas, tables, and views following best practices.
Write complex SQL queries, stored procedures, and UDFs for data transformation.
Optimize query performance using clustering, partitioning, and materialized views.
Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).
2. Data Pipeline Development
Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
Develop CDC (Change Data Capture) and real-time data processing solutions.
3. Data Modeling & Warehousing
Design star schema, snowflake schema, and data vault models in Snowflake.
Implement data sharing, secure views, and dynamic data masking.
Ensure data quality, consistency, and governance across Snowflake environments.
4. Performance Tuning & Troubleshooting
Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
Work with DevOps teams to automate deployments and CI/CD pipelines.
5. Collaboration & Documentation
Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
Document data flows, architecture, and technical specifications.
Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications
· 7+ years in database development, data warehousing, or ETL.
· 4+ years of hands-on Snowflake development experience.
· Strong SQL or Python skills for data processing.
· Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
· Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
· Certifications: SnowPro Core Certification (preferred).

Preferred Skills
· Familiarity with data governance and metadata management.
· Familiarity with DBT, Airflow, SSIS, and IICS.
· Knowledge of CI/CD pipelines (Azure DevOps).
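As a hedged illustration of two Snowflake features named above, Time Travel and Zero-Copy Cloning, here is a minimal Python sketch using the snowflake-connector-python package. Connection parameters and table names are placeholders, not a real environment.

# Hedged sketch: exercise Snowflake Time Travel and Zero-Copy Cloning from
# Python. Connection parameters and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="REPLACE_ME",     # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Time Travel: read the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print("Row count one hour ago:", cur.fetchone()[0])

    # Zero-Copy Cloning: instant, storage-free copy for safe experimentation.
    cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")
finally:
    conn.close()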
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Come work at a place where innovation and teamwork come together to support the most exciting missions in the world!

Description: We are seeking a talented Lead Software Engineer – Performance to deliver roadmap features of the Enterprise TruRisk Platform, which helps customers measure, communicate, and eliminate cyber risk. You will lead the performance engineering efforts across Spark, Kafka, Elasticsearch, and middleware APIs, ensuring that our real-time data pipelines and services meet enterprise-grade SLAs. As part of our high-performing engineering team, you will design and execute performance testing strategies, identify system bottlenecks, and work with development teams to implement performance improvements that support processing billions of cyber security events a day across our data platform.

Responsibilities:
Own the performance strategy for each release across distributed systems, including Hadoop, Spark, Kafka, Elasticsearch/OpenSearch, big data components, and APIs.
Define, develop, and execute performance test plans, load tests, stress tests, and soak tests.
Create realistic performance test scenarios for data pipelines and microservices based on production-like workloads.
Proactively identify bottlenecks, resource contention, and latency issues using tools such as JMeter, Spark UI, Kafka Manager, Elastic monitoring, and AppDynamics.
Provide deep-dive analysis and recommendations on tuning and scaling Spark jobs, Kafka topics/partitions, ES queries, and API endpoints.
Collaborate with developers, architects, and infrastructure teams to integrate performance feedback into design and implementation.
Simulate and benchmark real-time and batch data flow at scale using synthetic and production-like datasets, and own the synthetic data generator framework end to end.
Lead the initiative to build a performance testing framework that integrates with CI/CD pipelines.
Establish and track SLAs for throughput, latency, CPU/memory utilization, and garbage collection.
Create performance dashboards and visualizations using Prometheus/Grafana, Kibana, or equivalent.
Document performance test findings and create technical reports for leadership and engineering teams.
Recommend performance optimizations to Dev and Platform groups.
Take responsibility for optimizing overall cost.
Contribute to feature development and fixes apart from performance benchmarking.

Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
8+ years of overall experience in distributed systems and backend performance engineering.
4+ years of Java development experience with microservices architecture.
Proficient in scripting (Python, Bash) for automation and test data generation.
4+ years of hands-on experience with Apache Spark: performance tuning, memory management, and DAG optimization.
3+ years of experience with Kafka: topic optimization, producer/consumer tuning, and lag monitoring.
3+ years of experience with Elasticsearch/OpenSearch: query profiling, indexing strategies, and cluster optimization.
3+ years of experience with performance testing tools such as JMeter or similar.
Excellent programming and design skills and hands-on experience with Spring and Hibernate.
Deep understanding of middleware and microservices performance, including REST APIs.
Strong knowledge of profiling, debugging, and observability tools (e.g., Spark UI, Athena, Grafana, ELK).
Experience designing and running benchmarks at scale for high-throughput environments at petabyte scale.
Experience with containerized workloads and performance testing in Kubernetes/Docker environments.
Solid understanding of cloud-native architecture (OCI) and distributed systems design.
Strong knowledge of Linux operating systems and performance-related improvements.
Familiarity with CI/CD integration for performance testing (e.g., Jenkins, GitHub).
Knowledge of data lake architecture, caching solutions, and message queues.
Strong communication skills and experience influencing cross-functional engineering teams.

Additional Plus Competencies:
Prior experience in any analytics platform on big data would be a huge plus.
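Kafka consumer lag, the gap between a partition's latest offset and the consumer group's committed offset, is one of the throughput signals this role would track. A hedged Python sketch with the kafka-python client follows; the broker address, topic, and group id are assumptions.

# Hedged sketch: measure Kafka consumer lag per partition with kafka-python.
# Broker address, topic, and group id are placeholder assumptions.
from kafka import KafkaConsumer, TopicPartition

TOPIC = "security-events"      # placeholder
GROUP = "enrichment-service"   # placeholder

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id=GROUP,
    enable_auto_commit=False,
)

partitions = [TopicPartition(TOPIC, p)
              for p in (consumer.partitions_for_topic(TOPIC) or set())]
end_offsets = consumer.end_offsets(partitions)  # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp) or 0
    lag = end_offsets[tp] - committed
    print(f"partition={tp.partition} end={end_offsets[tp]} "
          f"committed={committed} lag={lag}")

consumer.close()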
Posted 1 week ago
10.0 years
0 Lacs
India
Remote
Job Type: Contractual - Full-time (3-5 months) - Very high chance of a full-time transition.
Location: Remote

About us: At Kan's Technologies Global, our mission is to develop the most advanced solutions, products, and platforms that drive business growth, solve real-world challenges, and catalyze innovation across industries worldwide. We specialize in building custom AI/NLP/ML solutions, including Conversational AI, Predictive Analytics, Demand Forecasting, and more, empowering organizations to stay ahead in an increasingly digital world. What fuels us? Innovation and Value Creation.

Role Summary: We are seeking a dynamic and experienced Project Manager with a strong background in managing cross-functional technology teams within the healthcare domain. The ideal candidate will be responsible for overseeing the end-to-end planning, execution, and delivery of healthcare IT projects. This role involves leading a team comprising UI/UX designers, Machine Learning engineers, and Fullstack + Backend developers to ensure timely delivery, quality assurance, and alignment with client and organizational goals. The candidate should bring a structured and proactive approach, leveraging modern project management tools and healthcare domain knowledge to drive success.

Roles & Responsibilities:
Collaborate with product and engineering teams to identify, extract, and engineer relevant features from diverse datasets to enhance model performance.
Research and recommend appropriate machine learning algorithms and statistical techniques for prediction, classification, and optimization tasks within the healthcare domain.
Deploy trained models as scalable, low-latency inference endpoints using Azure Machine Learning.
Work with DevOps to establish robust MLOps practices for model versioning, monitoring, and continuous retraining.
Work closely with Full-Stack and DevOps Engineers to ensure efficient data ingestion from various sources into Azure databases.
Perform exploratory data analysis (EDA) to uncover insights, identify trends, and understand data quality.
Present findings and model performance clearly to technical and non-technical stakeholders.
Monitor the performance and drift of deployed models, ensuring their continued accuracy and relevance (a drift-metric sketch follows this list).
Recommend and implement strategies for model retraining and improvement.
Actively contribute to the rapid development and iteration of the MVP, ensuring the ML components are robust, performant, and deliver tangible value.
Maintain clear and comprehensive documentation for models, data pipelines, and algorithms.
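One common way to monitor drift, as the responsibilities above describe, is the Population Stability Index (PSI), which compares a feature's current distribution against its training-time distribution. A minimal NumPy sketch follows; the bin count and the 0.2 alert threshold are common conventions, not requirements of this role.

# Minimal sketch: Population Stability Index (PSI) for drift monitoring.
# The 10-bin setup and 0.2 threshold are common conventions, not mandates.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p - q) * ln(p / q)) over shared histogram bins."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    p, _ = np.histogram(expected, bins=edges)
    q, _ = np.histogram(actual, bins=edges)
    # Convert to proportions; a tiny epsilon avoids division by or log of zero.
    eps = 1e-6
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # training-time distribution
live_scores = rng.normal(0.3, 1.1, 10_000)    # shifted production traffic

value = psi(train_scores, live_scores)
print(f"PSI = {value:.3f} ->", "retrain candidate" if value > 0.2 else "stable")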
Mandatory Qualifications:
Bachelor's degree in Computer Science, IT, Engineering, Healthcare Informatics, or a related field
Minimum 10 years of project management experience
At least 3 years of experience in the healthcare or healthtech domain
Experience managing cross-functional tech teams (UI/UX, ML, Fullstack + Backend)
Proficiency in Agile/Scrum methodologies
Hands-on with Jira, Confluence, Slack, Trello/ClickUp, Figma, GitHub/GitLab, Google Workspace
Strong communication skills (written and verbal)
Strong documentation and reporting skills

Preferred Qualifications:
PMP, PRINCE2, or Scrum Master (CSM) certification
Knowledge of HIPAA, HL7, FHIR, or GDPR in healthcare
Basic technical understanding of APIs, ML models, UI/UX workflows, DevOps, and Microsoft Azure
Experience in delivering healthcare IT projects (EHR/EMR, patient platforms)
Direct client-facing experience in tech project delivery
Comfortable managing teams across different time zones

What you'll gain:
Opportunity to work with international clients and real-world industry problems.
Collaborative environment with a team focused on innovation and impact.
Performance-based incentives and career growth opportunities.
Flexible work structure (remote-first culture).

Note: Compensation range up to 13 Lakhs INR per annum, based on experience.

At Kan's Technologies Global, we are committed to fostering a diverse and inclusive environment, where everyone is treated with respect and given equal opportunities, regardless of race, gender, religion, or any other characteristic. Discrimination of any kind is not tolerated, and we strive to create a workplace where all individuals can thrive.
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Open positions (1): Engineering -> Full Stack Engineer

The Dabba Network is a Decentralised Physical Infrastructure Network (DePIN) that is building high-speed wireless networks from the ground up in India. We enable humans and machines with internet connectivity at a price they can afford. Internet and WiFi are critical to every human. Dabba is building the software, hardware, and operating solutions to make internet access ubiquitous and affordable. We believe every household should come with internet, networking, and WiFi; just like water, gas, and electricity, the internet is a fundamental utility. Unlike traditional internet solutions, Dabba's approach is vertically integrated across engineering and operations. We design and manufacture our own hardware, develop the entire software stack, and manage our users' network. We are looking for people who are excited to work on complex problems, enjoy learning and working with others, and above all, are resilient and empathic. Join and help us build something amazing together for the next billion users.

About The Role
Our customers chose us for a simple, seamless network experience. Our Software Engineering team is critical to meeting this expectation. The team is small but growing, and as an early member you will have a large impact in forming the foundation of the Software Engineering team and role. As a Full Stack Engineer, you will have impact by:
Helping to build Dabba software from the ground up
Crafting experiences and tools for both internal and external users
Running UI experiments to craft a better experience for our customers
Defining how explorer data flows through the system

You might be a good fit if you:
Are comfortable working across the stack but particularly enjoy backend or frontend work
Have experience architecting, developing, and testing full-stack code end-to-end
Have experience turning designs into polished products
Understand how important low latency and performance are to user perceptions of application quality
Have experience and opinions building with the MERN stack, PostgreSQL, and AWS
Have built end-to-end products, from server code and data modelling at the database layer up to the UI
Are open to an experimental environment where we move fast and make incremental changes
Are comfortable taking ownership and driving technical decisions in an organisation
Have experience managing API integrations and data ingestion
Shipped an error to production and got so bothered by it that you decided to make it impossible for anyone at your company to commit the same class of error again
Have forked a library to remove features you don't want and simplify the interface
Have read through third-party software to understand what it does and why it's broken

Bonus points for:
Experience with writing software for embedded projects
Previous experience building/working in telecom software or infrastructure
Previous experience building Web3 projects - DeFi / blockchain infrastructure
Being customer-centric and passionate about working in a small and focused team

If you are interested in joining us
We encourage you to reach out even if your experience doesn't precisely match this job description. We operate with a growth mindset and expect ourselves and our colleagues to have the capacity and desire for ongoing growth. Tell us a little bit about yourself, why you are interested in what we are doing, and any relevant experience you have. Attach your resume. Please include links to LinkedIn, GitHub, any code samples, or blog posts.
Email us with the above information at career@wifidabba.com
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Eucloid is looking for a senior Data Engineer with hands-on expertise in Databricks to join our Data Platform team supporting various business applications. The ideal candidate will support the development of data infrastructure on Databricks for our clients, participating in activities ranging from upstream and downstream technology selection to designing and building different components. The candidate will also be involved in projects such as integrating data from various sources and managing big data pipelines that are easily accessible, with optimized performance across the overall ecosystem. The ideal candidate is an experienced data wrangler who will support our software developers, database architects, and data analysts on business initiatives. You must be self-directed and comfortable supporting the data needs of cross-functional teams, systems, and technical solutions.

Qualifications
B.Tech/BS degree in Computer Science, Computer Engineering, Statistics, or another Engineering discipline
Min. 5 years of professional work experience, with 1+ years of hands-on experience with Databricks
Highly proficient in SQL and data model (conceptual and logical) concepts
Highly proficient with Python and Spark (3+ years)
Knowledge of distributed computing and cloud databases like Redshift, BigQuery, etc.
2+ years of hands-on experience with one of the top cloud platforms: AWS/GCP/Azure
Experience with modern data stack tools like Airflow, Terraform, dbt, Glue, Dataproc, etc.
Exposure to Hadoop and shell scripting is a plus
Minimum experience: 2 years overall; Databricks: 1 year desirable; Python & Spark: 1+ years; SQL; any cloud: 1+ year

Responsibilities
Design, implementation, and improvement of processes and automation of data infrastructure
Tuning of data pipelines for reliability and performance
Building tools and scripts to develop, monitor, and troubleshoot ETLs
Perform scalability, latency, and availability tests on a regular basis.
Perform code reviews and QA data imported by various processes.
Investigate, analyze, correct, and document reported data defects.
Create and maintain technical specification documentation.
(ref:hirist.tech)
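A hedged PySpark sketch of the kind of pipeline-tuning work described above: reading a large dataset, aggregating, and writing partitioned output so downstream reads prune efficiently. Paths, column names, and partition counts are illustrative assumptions.

# Hedged sketch: a small PySpark batch job with basic performance hygiene.
# Paths, column names, and partition counts are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_rollup")
    .config("spark.sql.shuffle.partitions", "200")  # tune to cluster size
    .getOrCreate()
)

orders = spark.read.parquet("s3://example-bucket/raw/orders/")  # placeholder path

daily = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("n_orders"))
)

# Partition output by date so downstream reads prune efficiently.
(daily
 .repartition("order_date")
 .write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-bucket/curated/orders_daily/"))  # placeholder path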
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
When You Join Proclink
When you join Proclink, you will be working for a young and growing credit card business as the Principal Architect for a customer lifecycle stage (customer acquisitions, customer management, or collections). You will be deeply involved with the business and technical stakeholders in understanding the business needs, reviewing and discussing the PRDs, and translating the PRDs into design, the ARD, the engineering project plan, Jira tickets, test cases, and the implementation plan. You will work closely with and guide the offshore as well as onshore engineering teams through the sprints and all the way to UAT and post-production support. You will talk to external parties such as product/platform vendors, data providers, and partners to the business to flesh out details on product specs, data formats, latency, constraints, etc., to ensure that integration/customization happens according to the requirements and in line with the client's tech environment. You should be ready to work on new architectures/design patterns to ensure scalability and efficiency. While you may not code, you will definitely need to be able to guide the team, review their work, and defend your team's work in front of the senior engineering executives, including the CTO. You will lead technical reviews, code assessments, and solution designs to maintain high-quality product delivery, ensure that production applications are stable and operating as expected, and contribute to the client's strategic technology planning for the cards business. It is not necessary that you have worked in credit cards or banking in the past, but such experience will surely be a plus. If you are a person who waits for instructions and hesitates to take initiative, does not have a well-thought-out opinion, is not a go-getter, or is not a networker, please do not apply. This is a US-based client, so you should expect reasonable overlap with EST hours.

Job Specification
You should have spent at least 15 years in the technology industry, working with COEs, GCCs, or product companies, and should have worked with global clients in an offshore-onshore environment. You should have interacted directly with business stakeholders, peers in technology teams, and C-suite executives in your career. You should be proud of a few major initiatives you have taken and a few projects you have led.
Experience in full-stack web application development across frontend, backend, and infrastructure, with a solid understanding of technical fundamentals.
Advanced knowledge of Object-Oriented Design, Microservices, Service-Oriented Architecture, and application integration design patterns.
Solution architecture, systems design, design patterns, and framework implementation knowledge for enterprise solutions.
Expert in microservices and containerization.
Experience in one of the UI technologies, such as Angular or React.
Knowledge of designing and implementing secure solutions.
Experience making architecture-level decisions that span teams, applications, and technologies, with demonstrable improvements in the quality and speed of an engineering organization's output.
Strong track record of recruiting and retaining high-performing engineering talent.
Strong command of verbal and written communication to drive alignment and collaborate across functional teams.
Ability to interface with and influence leaders across an organization with poise.
Competency to foster and build a culture of success, accountability, and teamwork.
Experience in guiding the development of observable systems with robust metrics and alerts.
Ability to navigate in a nimble environment and drive success in unknown territory.
Minimum of an undergraduate degree in Computer Science or a related field.
Core Tech Stack: Node, TypeScript, JavaScript, AngularJS, RESTful APIs, Microservices, AWS, Docker, Kubernetes, Agile, and SCRUM.
You should be ready to work from our Hyderabad office. There will be travel both within India and to client locations, but not more than 15-20%.
(ref:hirist.tech)
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a skilled Java Developer with 2+ years of hands-on experience to join our team. The ideal candidate will have expertise in building and maintaining scalable backend systems, with a focus on GPS tracking and IoT solutions. You will contribute to enhancing the platform's performance, integrating new features, and ensuring seamless communication with GPS devices.

Responsibilities:
Develop, maintain, and optimize backend services using Java.
Work with the Netty framework to handle high-performance network communication (TCP/UDP, HTTP, WebSocket).
Design and manage database schemas using MySQL or PostgreSQL.
Implement and maintain GPS protocols for device communication.
Use Maven/Gradle for dependency management and build automation.
Optimize server performance for scalability and low-latency processing of GPS data.
Troubleshoot and debug issues across the stack, including networking, database, and API layers.
Participate in code reviews and adhere to best practices for code quality and testing.

Requirements:
2+ years of experience in Core Java development.
Proficiency in Netty for asynchronous event-driven networking.
Strong understanding of relational databases (MySQL/PostgreSQL).
Experience with Maven/Gradle and build automation tools.
Familiarity with RESTful APIs and WebSockets.

Skills:
Strong problem-solving skills for debugging complex distributed systems.
Ability to work independently and collaboratively in a team environment.
Excellent communication skills for documenting and explaining technical decisions.
(ref:hirist.tech)
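The posting is Java/Netty, but the asynchronous, event-driven TCP pattern it describes is language-agnostic. Here is a hedged Python asyncio sketch of a server ingesting line-delimited GPS fixes; the "id,lat,lon" wire format is an invented placeholder, not any real tracker protocol, and a production system would implement this in Java with Netty as the posting specifies.

# Hedged sketch (Python asyncio) of the event-driven TCP ingestion pattern
# the posting describes with Netty. The "id,lat,lon" wire format is invented.
import asyncio

async def handle_device(reader: asyncio.StreamReader,
                        writer: asyncio.StreamWriter) -> None:
    peer = writer.get_extra_info("peername")
    try:
        while line := await reader.readline():  # one GPS fix per line
            device_id, lat, lon = line.decode().strip().split(",")
            print(f"{peer} device={device_id} lat={lat} lon={lon}")
            writer.write(b"ACK\n")  # trackers often expect an acknowledgement
            await writer.drain()
    except (ValueError, ConnectionResetError):
        pass  # malformed frame or dropped socket: close quietly
    finally:
        writer.close()

async def main() -> None:
    server = await asyncio.start_server(handle_device, "0.0.0.0", 5055)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())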
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
NIQ is looking for a Software Engineer to join our AI/ML Engineering team. At NIQ, the Retail Measurement System (RMS) is a powerful analytics service that tracks product sales and market performance across a wide range of retail channels. It provides comprehensive, store-level data that helps businesses understand how their products are performing in the market, benchmark against competitors, and identify growth opportunities. The Charlink and Jarvis models are used to predict each product's placement in its ideal position within the product hierarchy tree, taking a data-driven approach to training models efficiently to predict placements from product characteristics. You will develop frontend applications that interact with ML models, integrate inference code, and provide tools and patterns for enhancing our MLOps cycle. The ideal candidate has strong software design and programming experience, some expertise in cloud computing and big data technologies, and strong communication and management skills. You will be part of a diverse, flexible, and collaborative environment where you will be able to apply and develop your skills and knowledge, working with unique data and exciting applications. Our software engineering platform is based on AngularJS, Java, React, Spring Boot, TypeScript, JavaScript, SQL, and Snowflake, and we continue to adopt the best of breed in cloud-native, low-latency technologies.

Who we are looking for:
You have a strong entrepreneurial spirit and a thirst to solve difficult challenges through innovation and creativity, with a strong focus on results
You have a passion for data and the insights it can deliver
You are intellectually curious with a broad range of interests and hobbies
You take ownership of your deliverables
You have excellent analytical, communication, and interpersonal skills
You communicate well with both technical and non-technical audiences
You can work with distributed teams situated globally in different geographies
You want to work in a small team with a start-up mentality
You can work well under pressure, prioritize work, and stay well organized
You relish tackling new challenges, paying attention to details, and, ultimately, growing professionally

Responsibilities
Design, develop, and maintain scalable web applications using AngularJS for the front end and Java (Spring Boot) for the backend
Collaborate closely with cross-functional teams to translate business requirements into technical solutions
Optimize application performance, usability, and responsiveness
Conduct code reviews, write unit tests, and ensure adherence to coding standards
Troubleshoot and resolve software defects and production issues
Contribute to architecture and technical documentation

Qualifications
5 years of experience as a full stack developer
Proficient in AngularJS (version 12+), TypeScript, Java, and the Spring Framework (especially Spring Boot)
Experience with RESTful APIs and microservices architecture
Solid understanding of HTML, CSS, JavaScript, and responsive web design
Familiarity with relational databases (e.g., MySQL, PostgreSQL)
Hands-on experience with version control systems (e.g., GitHub) and CI/CD tools
Strong problem-solving abilities and attention to detail
3-5+ years of relevant software engineering experience
Minimum B.S. degree in Computer Science, Computer Engineering, Information Technology, or a related field
Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
Recharge and revitalize with the help of wellness plans made for you and your family
Plan your future with financial wellness tools
Stay relevant and upskill yourself with career development opportunities

Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 week ago
0.0 - 31.0 years
2 - 3 Lacs
Pune
On-site
We are looking for a Java Developer to join our growing team. The ideal candidate will have a strong foundation in Java and be comfortable working in a fast-paced, agile environment. You will be responsible for designing, developing, and maintaining Java-based applications that are high-volume, low-latency, and mission-critical.

🔧 Key Responsibilities:
Design, implement, and maintain Java applications
Participate in all phases of the development lifecycle
Write well-designed, efficient, and testable code
Ensure designs comply with specifications
Prepare and produce releases of software components
Support continuous improvement by investigating alternatives and technologies
Collaborate with cross-functional teams to define and deliver new features

💼 What We Offer:
Competitive salary and benefits
Flexible working hours and remote work options
Professional development and training opportunities
A collaborative and innovative work environment

Regards, HR Dept.
Posted 1 week ago
5.0 years
0 Lacs
Roorkee, Uttarakhand, India
Remote
Company Description
Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization, and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents and operates in over 25 countries around the world. Miratech retains nearly 1000 full-time professionals, and our annual growth rate exceeds 25%.

Job Description
We are seeking a Senior Telecom Engineer who will play a vital role in delivering high-quality voice call capabilities for advanced call and contact center solutions. In this position, you will serve as the technical expert responsible for designing, deploying, and maintaining our clients' global voice infrastructure. The ideal candidate will have deep expertise in VoIP/SIP technologies, telecom carrier connectivity, and Ribbon (Sonus) SBCs, along with strong hands-on experience in troubleshooting complex network issues. Your work will directly impact the experience of millions of users worldwide, making voice communication smarter, faster, and more resilient.

Responsibilities:
Collaborate with customer telecom and IT teams to deploy customized solutions and troubleshoot issues.
Build and maintain SIP trunk connectivity with customers and carriers, including interop sessions and activations.
Provide operational support for the telecom network, analyze incidents, and implement preventive measures.
Serve as an escalation point for critical alerts from SBCs and deliver root-cause analyses for outages.
Manage telecom service providers and vendors, and oversee hardware/software deployments and new service rollouts.
Develop testing plans, create technical documentation, and maintain SOPs for recurring tasks.
Mentor junior engineers in troubleshooting and managing complex service issues.
Understand product capabilities and limitations, and contribute to continuous improvements.

Qualifications
5+ years of telecom engineering experience with VoIP/SIP voice applications.
Strong knowledge of voice/data communications (SIP, TCP/IP, MPLS), VoIP protocols (H.248, G.711, G.729, WebRTC), and security (TLS, IPsec, ACLs).
High-level knowledge of VoIP principles, protocols, and codecs such as H.248, SIP, G.711, G.729, WebRTC, MPLS, VPN, UDP, RTP, MTP, etc.
Experience with international routing (ITFS, iDID) and telephony design for high availability (99.99% SLA).
Hands-on experience with softswitches, SBCs (Sonus/Ribbon, AudioCodes), SIP proxies, and media servers (AudioCodes IPM-6310, FreeSWITCH).
Skilled in Wireshark, Empirix, and RTP stream analysis (MOS, jitter, latency, packet loss); a minimal jitter-estimation sketch follows this listing.
Ability to design, troubleshoot, and maintain complex global voice networks.
Strong organizational skills and experience implementing telecom architecture changes in lab and production.

We offer:
Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth.
Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program.
Work From Anywhere Culture: make the most of the flexibility that comes with remote work.
Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility, and internship opportunities.
Global Impact: collaborate on impactful projects for top global clients and shape the future of industries.
Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events.
Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality.

Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.
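Since the role calls for RTP stream analysis (MOS, jitter, latency, packet loss), here is a minimal Python sketch of the RFC 3550 interarrival-jitter estimator that tools like Wireshark report. The sample timestamps are fabricated for illustration only.

# Minimal sketch: RFC 3550 interarrival jitter, the estimator Wireshark's
# RTP analysis reports. Sample arrival/RTP timestamps below are fabricated.

def rtp_jitter(samples: list[tuple[float, float]]) -> float:
    """samples: (arrival_time, rtp_timestamp) pairs in the same time units.

    J(i) = J(i-1) + (|D(i-1, i)| - J(i-1)) / 16, where D is the difference
    in relative transit time between consecutive packets.
    """
    jitter = 0.0
    prev_arrival, prev_ts = samples[0]
    for arrival, ts in samples[1:]:
        d = (arrival - prev_arrival) - (ts - prev_ts)
        jitter += (abs(d) - jitter) / 16.0
        prev_arrival, prev_ts = arrival, ts
    return jitter

# 20 ms packetization with a little network-induced wobble (fabricated).
packets = [(0.000, 0.000), (0.021, 0.020), (0.043, 0.040), (0.060, 0.060)]
print(f"jitter ~= {rtp_jitter(packets) * 1000:.2f} ms")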
Posted 1 week ago