0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Swirl AI turns every product page into a human-like sales conversation, blending LLMs, video understanding, real-time retrieval, and agent orchestration. We've rocketed from 0 to eight figures of ARR in a few months, and demand is outpacing our 7-person tech team. So we need one extraordinary engineer to own the core of our platform and push it to Silicon Valley scale.

What you'll do

First 90 days
• Deep-dive into our multimodal stack (OpenAI-/Claude-based LLMs, custom SLMs, Azure Video Indexer, Pinecone/RAG, LangGraph).
• Ship v2 of our SKU-specific "video + text" agent (latency < 500 ms, zero hallucinations).
• Productionize auto-evaluation and guardrails (sentiment, brand safety).
• Stand up voice & XR modalities and experiment with on-device inference.

6-12 months
• Real-time GEO optimisation.
• Lead design of our "agent marketplace": plug-and-play warranty, finance-offer, and upsell agents.
• Drive infra hardening to handle 10M+ interactions/month across multiple Fortune 500 sites.

Long term
• Build and mentor an elite AI/ML/systems team.
• Architect the path to self-serve onboarding and a global content network.

You might be a fit if you
• Have 1-5+ yrs building production systems at scale.
• Shipped deep-learning products end-to-end: data pipeline ➜ model training/fine-tuning ➜ safety/guardrails ➜ serving (K8s, CUDA, Triton, Ray, or similar).
• Are hands-on with multimodal (video, speech, vision) and agentic/RAG architectures.
• Are fluent in Python/TypeScript/Go; can debug distributed systems at 2 a.m. and still think product.
• Thrive in zero-to-one chaos: sketch, hack, iterate, talk to customers, then rewrite for scale.
• Believe ownership > titles, data-driven rigor > ego, and shipping weekly > polishing forever.

What success looks like
• p50 latency < 500 ms for a multimodal query across 100k videos & 10M documents.
• Swirl AI becomes the reference "AI Sales Agent" demo in every Fortune 100 board deck.
• We out-innovate incumbents (Salesforce, Adobe, Shopify) by shipping features 4× faster with a team 10× leaner.

Comp, stage & perks
• Top-of-market cash + meaningful founding equity (we're an early-stage, venture-backed rocket).
• Choose your rig: M-series MacBook + 4K monitor, or a Linux workstation with an RTX 6000.
• Remote-first (US/EU/India time overlap) with quarterly off-sites in Dubai, SF & Bangalore.
• Visa support, health + mental-wellness stipend, conference budget, unlimited books.
• Report directly to Kaizad Hansotia (Founder/CEO) & Akshil Shah (CTO) and shape a product already trusted by BYD, Toyota, LG & Lennox.

How to apply
Send your GitHub/LinkedIn plus 2-3 sentences on the toughest system you've built to careers@goswirl.ai. Side projects, papers, or a Loom walk-through of your favourite model-ops trick are a huge plus. We move fast: expect a 48-hour reply → 1 technical deep-dive → paid take-home sprint → offer.

Join us to make product pages talk, show & sell — at human level, globally.
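The stack above leans on retrieval-augmented generation. As a rough illustration (not Swirl's actual pipeline, and with toy three-dimensional embeddings standing in for real model output), the core retrieval step just ranks documents by similarity to a query embedding:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank (doc_id, vector) pairs by similarity to the query; return top-k ids.
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Hypothetical document embeddings for three agent types.
corpus = [
    ("warranty", [0.9, 0.1, 0.0]),
    ("finance", [0.1, 0.9, 0.0]),
    ("upsell", [0.0, 0.2, 0.9]),
]
print(retrieve([1.0, 0.0, 0.1], corpus, k=1))  # ['warranty']
```

In production a vector database such as Pinecone replaces the linear scan, but the ranking idea is the same.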
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must-have skills: Go Programming Language
Good-to-have skills: Java Full Stack Development
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As a Software Development Engineer, you will engage in a dynamic work environment where you will analyze, design, code, and test various components of application code across multiple clients. Your day will involve collaborating with team members to ensure the successful implementation of software solutions, while also performing maintenance and enhancements to existing applications. You will be responsible for delivering high-quality code and contributing to the overall success of the projects you are involved in, ensuring that client requirements are met effectively and efficiently.

Roles & Responsibilities:
- Develop, test, and maintain scalable back-end services, APIs, and microservices; full unit-test coverage is mandatory.
- Design and implement robust, secure, and reliable systems to handle complex workflows.
- Collaborate with US-based cross-functional teams to gather requirements and create solutions tailored to enterprise needs.
- Optimize existing back-end systems for performance, scalability, and maintainability.
- Implement and enforce best practices for code quality, testing, deployment, and documentation.
- Troubleshoot and resolve back-end system issues, ensuring high availability.

Professional & Technical Skills:
- 5+ years of professional software engineering experience in a product-oriented, live production environment.
- 2+ years developing Golang software.
- 2+ years' experience building on microservices architecture with AWS.
- Strong background in building scalable, reliable, and low-latency systems.
- Experience with back-end frameworks including GraphQL and REST.
- Proficiency in working with relational databases like PostgreSQL.
- Strong understanding of modern web protocols, security concerns, and system integrations.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Go programming language.
- This position is based at our Hyderabad office.
- B.Tech required; 15 years full-time education.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Background/Context: Markets Technology is a front-to-back group that works closely with our Markets businesses to design and implement technology solutions that achieve our challenging business and technical goals. This makes the group an exciting and fast-paced environment to work in. As a franchise, Citi's market penetration is second to none, across asset classes and geographical boundaries, making it a truly unique place to work.

This particular role is within the FX Technology group, with a primary focus on delivering to the STIRT business; in addition, the successful candidate is expected to be involved in broader Markets initiatives. The selected candidate will be part of a team responsible for functional areas within the STIRT pricing and distribution development group within FX Technology. The key goal is to build out an increased electronic franchise across all products covered by the FX desks globally.

Job Purpose: We are seeking a Java developer with a particular focus on delivery of electronic trading software for a demanding, fast-paced front-office technology client base. The role will have a priority focus on delivering best-in-class pricing and distribution technology to the FX business.
Looking for developers with the following characteristics:
- As an individual contributor, the ability to manage and prioritize work and deliver to deadlines
- A track record of 3-5 years of server-side Java development
- Familiarity with one or more of the following additional technical areas: Spring Boot, cloud-native technologies, messaging middleware (Solace), low-latency programming, NoSQL databases, Python/Groovy scripting
- A practical understanding of enterprise application delivery in a structured environment
- Ability to communicate effectively across multiple levels of the organization and across multiple locales
- Motivated by working on high-profile projects that benefit other developers
- Strong software engineering skills to build technology 'in the right way', sustaining long-term success

The developer will have the opportunity to:
- Build software that solves challenging front-office business problems
- Be involved in the project from successful pilot to full-scale rollout
- Extend the project (new features, enhancing existing features)
- Contribute to strategic longer-term technical direction
- Work closely with Front Office colleagues across multiple locations and business lines
- Build your profile with senior technologists across the Markets Technology group

Key Responsibilities: The developer will be accountable for:
- Coordinating with stakeholders to deliver work items in line with expectations
- Implementing solutions to issues identified
- Identifying, estimating and implementing enhancements
- Contributing to shaping the future technical direction of the product
- Providing development support in response to incidents and requests raised through support channels
- Communicating project progress and promoting achievements

Knowledge/Experience:
- At least 3-5 years of commercial Java development experience
- Experience developing and supporting mission-critical applications
- Experience designing and developing distributed systems using a range of middleware and database products
- Knowledge of FX is an advantage, with an appreciation of the whole pricing lifecycle
- Knowledge of and exposure to the regulatory environment impacting the banking industry is an advantage
- Experience working on a mature codebase in a large collaborative environment
- Understanding of the DevOps chain: CI/CD, cloud deployment, etc.

Skills:
- Proficient in core Java development (Java 17 and beyond)
- Middleware technologies
- Expertise in Unix (Linux) commands & scripting
- Database skills: Oracle, plus exposure to NoSQL DBs
- Process and tools to produce well-written, low-defect-rate code
- Experience with collaboration tools (source control)

Competencies:
- Strong aptitude for analysis and problem solving
- Strong written and verbal communication skills
- Attention to detail
- Self-motivated
- Willingness to learn as well as contribute to the wider team
- Excellent planning and organizational skills

Qualifications: A good academic background, with at least an undergraduate degree in a technical subject.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills: Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 2 weeks ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Hey There 👋

At Saleshandy, we're building the Cold Email Outreach platform of the future. We're building a product that eliminates manual processes and helps companies generate more replies, book more meetings, and generate leads faster. Since our founding in 2016, we've grown to become a profitable, 100% geographically dispersed team of 65+ high-performing, happy people who are dedicated to building a product that our customers love.

What's the Role About?
Ever wondered how Saleshandy schedules millions of emails and still feels lightning-fast? Behind that magic is performance engineering. We're hiring a Performance Engineer who thrives on making systems faster, leaner, and more reliable across backend, frontend, and infrastructure. Your mission: eliminate latency, fix CPU/memory bottlenecks, optimize queries, tame queues, and guide teams to build with performance in mind. This isn't just about fire-fighting; it's about owning speed as a product feature. You'll work across the stack and use deep diagnostics, smart tooling, and system intuition to make things fly.

Why Join Us?
Purpose: Your work will directly impact page speeds, email throughput, and scale. At Saleshandy, performance isn't a luxury; it's part of our premium promise.
Growth: You'll operate across multiple teams and tech layers (Node.js, MySQL, Redis, React, Kafka, ClickHouse, AWS) with the freedom to shape how we build fast systems.
Motivation: If you've ever celebrated shaving 500 ms off a page load, or chased a memory leak across 3 services just for fun, this is your home. We celebrate engineers who care about P99s, flamegraphs, and cache hits.

Your Main Goals

Identify and Eliminate Backend Bottlenecks (within 90 days)
- Run deep diagnostics using Clinic.js, heap snapshots, GC logs, and flamegraphs.
- Tackle high CPU/memory usage, event-loop stalls, and async call inefficiencies in Node.js.
- Goal: Cut backend P95 response times by 30-40% for key APIs.
Optimize MySQL Query Performance & Configuration (within 60 days)
- Use slow query logs, EXPLAIN, Percona Toolkit, and indexing strategies to tune queries and schema.
- Tune server-level configs like innodb_buffer_pool_size.
- Target: Eliminate the top 10 slow queries and reduce DB CPU usage by 25%.

Improve Frontend Performance & Load Time (within 90 days)
- Audit key frontend flows using Lighthouse, Core Web Vitals, and asset audits.
- Drive improvements via lazy loading, tree-shaking, and code splitting.
- Goal: Get homepage and dashboard load times under 1.5 s for 95% of users.

Make Infra & Monitoring Observability-First (within 120 days)
- Set up meaningful alerts and dashboards using Grafana, Loki, Tempo, and Prometheus.
- Lead infra-level debugging: thread stalls, IO throttling, network latency.
- Goal: Reduce time-to-detect and time-to-resolve for perf issues by 50%.

Important Tasks
- First 30 Days – System Performance Audit: Do a full audit of backend, DB, infra, and frontend performance. Identify critical pain points and quick wins.
- Debug a Live Performance Incident: Catch and resolve a real-world performance regression. It could be a Node.js memory leak, a slow MySQL join, or Redis job congestion. Share a full RCA and fix.
- Create and Share Performance Playbooks (by Day 45): Build SOPs for slow-query debugging, frontend perf checks, Redis TTL fixes, and Node.js memory leaks. Turn performance tuning into a team sport.
- Guide Teams on Performance-Aware Development (within 90 days): Create internal micro-trainings or async reviews to help devs write faster APIs, reduce DB load, and spot regressions earlier.
- Use AI or Smart Tooling in Diagnostics: Try out tools like Copilot for test coverage, or use AI-powered observability tools (e.g. Datadog AI, Loki queries, etc.) to accelerate diagnostics.
- Build Flamegraph/Profiling Baselines: Set up and maintain performance profiling baselines (using Clinic.js, 0x, etc.) so regressions can be caught before they ship.
- Review Queues and Caching Layer: Identify performance issues in Redis queues (retries, TTL delays, locking) and tune caching strategies across app and DB.
- Contribute to Performance Culture: Encourage tracking of real metrics: TTI, DB query time, API P95s. Collaborate with product and engineering to define what "fast enough" means.

Experience Level: 3-5 years
Tech Stack: Node.js, MySQL, Redis, Grafana, Prometheus, Clinic.js, Percona Toolkit

Culture Fit – Are You One of Us?
We're a fast-moving, globally distributed SaaS team where speed matters, not just in product but in how we work. We believe in ownership, systems thinking, and real accountability. If you like solving hard problems, value simplicity, and hate regressions, you'll thrive here.
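The percentile targets above (P50, P95, P99) are just order statistics over request latencies. As a rough sketch (in Python for illustration, though the stack here is Node.js; the latency samples are hypothetical), the nearest-rank method computes them in a few lines:

```python
import math

def percentile(samples, p):
    # Nearest-rank percentile: sort samples, take the ceil(p/100 * n)-th value.
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical API latencies in milliseconds; note one slow outlier.
latencies_ms = [120, 95, 410, 87, 130, 99, 105, 2200, 150, 101,
                98, 115, 140, 93, 108, 125, 97, 111, 102, 135]

print(percentile(latencies_ms, 50))  # 108 (median is barely moved by outliers)
print(percentile(latencies_ms, 95))  # 410 (tail percentiles expose the slow requests)
```

This is why the goals above target P95 rather than the average: a mean over these samples would be dragged up by the single 2200 ms request while most users see ~100 ms.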
Posted 2 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description and Requirements

Position Summary
A Big Data (Hadoop) Administrator responsible for supporting the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux. Strong expertise in DevOps practices, automation, and scripting (e.g. Ansible, Azure DevOps, Shell, Python) to streamline operations and improve efficiency is highly valued.

Job Responsibilities
- Assist in the installation, configuration, and maintenance of Cloudera Data Platform (CDP) and Cloudera Flow Management (CFM) streaming clusters on RedHat Linux.
- Perform routine monitoring, troubleshooting, and issue resolution to ensure the stability and performance of Hadoop clusters.
- Develop and maintain scripts (e.g., Python, Bash, Ansible) to automate operational tasks and improve system efficiency.
- Collaborate with cross-functional teams, including application development, infrastructure, and operations, to support business requirements and implement new features.
- Implement and follow best practices for cluster security, including user access management and integration with tools like Apache Ranger and Kerberos.
- Support backup, recovery, and disaster recovery processes to ensure data availability and business continuity.
- Conduct performance tuning and optimization of Hadoop clusters to enhance system efficiency and reduce latency.
- Analyze logs and use tools like Splunk to debug and resolve production issues.
- Document operational processes, maintenance procedures, and troubleshooting steps to ensure knowledge sharing and consistency.
- Stay updated on emerging technologies and contribute to the adoption of new tools and practices to improve cluster management.

Education, Technical Skills & Other Critical Requirements

Education
Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.
Experience (In Years)
7+ years total IT experience and 4+ years relevant experience in Big Data databases.

Technical Skills
- Big Data Platform Management: Knowledge in managing and optimizing the Cloudera Data Platform, including components such as Apache Hadoop (YARN and HDFS), Apache HBase, Apache Solr, Apache Hive, Apache Kafka, Apache NiFi, Apache Ranger, and Apache Spark, as well as JanusGraph and IBM BigSQL.
- Automation and Scripting: Expertise in automation tools and scripting languages such as Ansible, Python, and Bash to streamline operational tasks and improve efficiency.
- DevOps Practices: Proficiency in DevOps tools and methodologies, including CI/CD pipelines, version control systems (e.g., Git), and infrastructure-as-code practices.
- Monitoring and Troubleshooting: Experience with monitoring and observability tools such as Splunk, Elastic Stack, or Prometheus to identify and resolve system issues.
- Linux Administration: Solid knowledge of Linux operating systems, including system administration, troubleshooting, and performance tuning.
- Backup and Recovery: Familiarity with implementing and managing backup and recovery processes to ensure data availability and business continuity.
- Security and Access Management: Understanding of security best practices, including user access management and integration with tools like Kerberos.
- Agile Methodologies: Knowledge of Agile practices and frameworks, such as SAFe, with experience working in Agile environments.
- ITSM Tools: Familiarity with ITSM processes and tools like ServiceNow for incident and change management.

Other Critical Requirements
- Excellent analytical and problem-solving skills.
- Ability to work in a 24x7 rotational shift to support Hadoop platforms and ensure high availability.
- Excellent written and oral communication skills, including the ability to clearly communicate/articulate technical and functional issues with conclusions and recommendations to stakeholders.
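Log analysis of the kind described above is routinely scripted. As an illustrative sketch (the log format and component names here are hypothetical, not MetLife's actual tooling), a small Python helper can tally ERROR lines per component before a deeper dive in Splunk:

```python
import re
from collections import Counter

# Hypothetical Hadoop-style log lines for illustration.
LOG_LINES = [
    "2024-05-01 10:00:01 ERROR datanode: disk threshold exceeded",
    "2024-05-01 10:00:02 INFO  namenode: heartbeat ok",
    "2024-05-01 10:00:05 ERROR datanode: disk threshold exceeded",
    "2024-05-01 10:00:09 WARN  kafka: ISR shrink on partition 3",
    "2024-05-01 10:00:11 ERROR kafka: broker connection lost",
]

def error_counts(lines):
    # Count ERROR entries per component (the token before the colon).
    pattern = re.compile(r"ERROR\s+(\w+):")
    hits = Counter()
    for line in lines:
        m = pattern.search(line)
        if m:
            hits[m.group(1)] += 1
    return dict(hits)

print(error_counts(LOG_LINES))  # {'datanode': 2, 'kafka': 1}
```

The same pattern generalizes to reading rotated log files or feeding counts into an alerting pipeline.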
- Prior experience in handling stateside and offshore stakeholders.
- Experience in creating and delivering business presentations.
- Demonstrated ability to work independently and in a team environment.
- Demonstrated willingness to learn and adopt new technologies and tools to improve operational efficiency.

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East.

Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky-is-the-limit thinking in a cloud-enabled world.

Microsoft's Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.

Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query – make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.

The Fabric Data Integration team is currently seeking a Software Engineer to join their team. This team is in charge of designing, building, and operating a next-generation service that transfers large volumes of data from various source systems to target systems with minimal latency, while providing a data-centric orchestration platform. The team focuses on advanced data movement/replication scenarios while maintaining user-friendly interfaces. Working collaboratively, the team utilizes a range of technologies to deliver high-quality products at a fast pace.

We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Responsibilities
- Build cloud-scale products with a focus on efficiency, reliability and security.
- Build and maintain end-to-end build, test and deployment pipelines.
- Deploy and manage massive Hadoop, Spark and other clusters.
- Contribute to the architecture and design of the products.
- Triage issues and implement solutions to restore service with minimal disruption to the customer and business. Perform root cause analysis, trend analysis and post-mortems.
- Own components and drive them end to end, all the way from gathering requirements, development, testing and deployment to ensuring high quality and availability post-deployment.
- Embody our culture and values.

Qualifications

Required/Minimum Qualifications
- Bachelor's degree in computer science or a related technical discipline AND 2+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.

Other Requirements
Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings:
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.

Preferred/Additional Qualifications
- Bachelor's degree in computer science or a related technical field AND 1+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java; OR master's degree in computer science or a related technical field AND 1+ years technical engineering experience with coding in languages including, but not limited to, C, C++, C#, or Java; OR equivalent experience.
- 1+ years of experience in developing and shipping system-level features in an enterprise production backend server system.
- Experience building distributed systems with reliability guarantees.
- Understanding of data structures, algorithms, and distributed systems.
- Solve problems by always leading with passion and empathy for customers.
- A desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes.
- Enthusiasm, integrity, self-discipline, and results-orientation in a fast-paced environment.
- 1+ years of experience building and supporting production-grade distributed cloud services.

#azdat #azuredata #azdataintegration

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
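Data-movement services of the sort described rely on standard reliability patterns such as retries with exponential backoff when a source or target system is transiently unreachable. A generic sketch (illustrative only, not Microsoft's implementation; the `send` callable is a stand-in for any transfer step):

```python
def backoff_delays(base=0.5, factor=2.0, retries=5, cap=30.0):
    # Exponential backoff schedule with a cap. Production systems typically
    # add random jitter; it is omitted here to keep the output deterministic.
    delays = []
    for attempt in range(retries):
        delays.append(min(cap, base * (factor ** attempt)))
    return delays

def transfer_with_retry(send, retries=5):
    # Call send() until it succeeds or retries are exhausted.
    last_err = None
    for delay in backoff_delays(retries=retries):
        try:
            return send()
        except ConnectionError as err:
            last_err = err  # in a real service: sleep(delay) before retrying
    raise last_err

print(backoff_delays())  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

Combined with idempotent writes on the target side, this gives at-least-once delivery without duplicating data.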
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Are you passionate about building and maintaining large-scale production systems that support advanced data science and machine learning applications? Do you want to join a team at the heart of NVIDIA's data-driven decision-making culture? If so, we have a great opportunity for you!

NVIDIA is seeking a Senior Site Reliability Engineer (SRE) for the Data Science & ML Platform(s) team. The role involves designing, building, and maintaining services that enable real-time data analytics, streaming, data lakes, observability, and ML/AI training and inferencing. The responsibilities include implementing software and systems engineering practices to ensure high efficiency and availability of the platform, as well as applying SRE principles to improve production systems and optimize service SLOs. The role also involves collaborating with our customers to plan and implement changes to the existing system while monitoring capacity, latency, and performance.

To succeed in this position, you need a strong background in SRE practices, systems, networking, coding, capacity management, cloud operations, continuous delivery and deployment, and open-source cloud-enabling technologies like Kubernetes and OpenStack. A deep understanding of the challenges and standard methodologies of running large-scale distributed systems in production, solving complex issues, automating repetitive tasks, and proactively identifying potential outages is also necessary. Furthermore, excellent communication and collaboration skills, and a culture of diversity, intellectual curiosity, problem solving, and openness are essential.

As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth.
The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking. If you are seeking an exciting and rewarding career that makes a difference, we invite you to apply now!

What You'll Be Doing
- Develop software solutions to ensure reliability and operability of large-scale systems supporting mission-critical use cases.
- Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks.
- Create tools and automation to reduce operational overhead and eliminate manual tasks.
- Establish frameworks, processes, and standard methodologies to enhance operational maturity and team efficiency, and accelerate innovation.
- Define meaningful and actionable reliability metrics to track and improve system and service reliability.
- Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally.
- Build tools to improve our service observability for faster issue resolution.
- Practice sustainable incident response and blameless postmortems.

What We Need To See
- A minimum of 6+ years of experience in SRE, cloud platforms, or DevOps with large-scale microservices in production environments.
- Master's or Bachelor's degree in Computer Science, Electrical Engineering, or CE, or equivalent experience.
- Strong understanding of SRE principles, including error budgets, SLOs, and SLAs.
- Proficiency in incident, change, and problem management processes.
- Skill in problem-solving, root cause analysis, and optimization.
- Experience with streaming data infrastructure services, such as Kafka and Spark.
- Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus).
- Proficiency in programming languages such as Python, Go, Perl, or Ruby.
- Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments.
- Experience in deploying, supporting, and supervising services, platforms, and application stacks.

Ways To Stand Out From The Crowd
- Experience operating large-scale distributed systems with strong SLAs.
- Excellent coding skills in Python and Go, and extensive experience operating data platforms.
- Knowledge of CI/CD systems, such as Jenkins and GitHub Actions.
- Familiarity with Infrastructure as Code (IaC) methodologies and tools.
- Excellent interpersonal skills for identifying and communicating data-driven insights.

NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science-fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence.

JR1999109
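The error budgets and SLOs mentioned above reduce to simple arithmetic: a 99.9% availability SLO over a 30-day window leaves 0.1% of minutes (or requests) as the budget. A quick sketch of that bookkeeping (generic SRE arithmetic, not NVIDIA-specific; numbers in the usage lines are hypothetical):

```python
def error_budget_minutes(slo_pct, window_days=30):
    # Minutes of allowed downtime in the window for a given availability SLO.
    total_minutes = window_days * 24 * 60
    return round(total_minutes * (1 - slo_pct / 100), 2)

def budget_remaining(slo_pct, total_requests, failed_requests):
    # Fraction of the request-based error budget still unspent (negative
    # means the budget is blown and feature rollouts should pause).
    allowed_failures = total_requests * (1 - slo_pct / 100)
    return round(1 - failed_requests / allowed_failures, 2)

print(error_budget_minutes(99.9))              # 43.2 minutes per 30 days
print(budget_remaining(99.9, 1_000_000, 250))  # 0.75
```

Tracking the remaining fraction over the window is what makes the "error budget" actionable: it converts a reliability target into a pacing signal for releases.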
Posted 2 weeks ago
4.0 years
0 Lacs
India
On-site
Aviso is the AI compass that guides sales and go-to-market teams to close more deals, accelerate growth, and find their Revenue True North. Aviso AI delivers revenue intelligence, drives informed team-wide actions and course corrections, and gives precise guidance so sellers and teams don't get lost in the fog of CRM and can augment themselves with predictive AI. With demonstrated results across Fortune 500 companies and industry leaders such as Dell, Splunk, Nuance, Elastic, GitHub, and RingCentral, Aviso works at the frontier of predictive AI to help teams close more deals and drive more revenue.

Aviso AI has generated 305 billion insights, analyzed $180B in pipeline, and helped customers win $100B in deals. Companies use Aviso to drive more revenue, achieve goals faster, and win in bold, new frontiers. By using Aviso's guided-selling tools instead of conventional CRM systems, sales teams close 20% more deals with 98%+ accuracy, and reduce spending on non-core CRM licenses by 30%.

Job Description: We are looking for a skilled and motivated Data Engineer to join our growing team, focused on building fast, scalable, and reliable data platforms that power insights across the organization. If you enjoy working with large-scale data systems and solving complex challenges, this is the role for you.

What You'll Do:
- Grow our analytics capabilities by building faster and more reliable tools to handle petabytes of data daily.
- Brainstorm and develop new platforms to serve data to users in all shapes and forms, with low latency and horizontal scalability.
- Troubleshoot and diagnose problems across the entire technical stack.
- Design and develop real-time event pipelines for data ingestion and real-time dashboards.
- Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
- Design and implement new components using emerging technologies in the Hadoop ecosystem, ensuring the successful execution of various projects.
Skills That Will Help You Succeed in This Role:
• Strong hands-on experience (4+ years) with Apache Spark, preferably PySpark.
• Excellent programming and debugging skills in Python.
• Experience with scripting languages such as Python, Bash, etc.
• Solid experience with databases such as SQL, MongoDB, etc.
• Good to have: experience with AWS and cloud technologies such as Amazon S3.
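The posting's transformation work (turning raw event streams into reliable data-lake rows) can be sketched in plain Python. Everything here is hypothetical — the event fields and function names are invented for illustration — and in practice the same map-and-filter logic would typically be expressed as PySpark DataFrame operations (`select`/`where`) rather than Python loops:

```python
from datetime import datetime, timezone
from typing import Optional

def parse_event(raw: dict) -> Optional[dict]:
    """Normalize one raw event; return None for malformed records.
    The field names ("user_id", "event", "ts") are hypothetical."""
    try:
        return {
            "user_id": str(raw["user_id"]),
            "event_type": str(raw["event"]).lower(),
            # epoch seconds -> ISO-8601 UTC timestamp
            "ts": datetime.fromtimestamp(int(raw["ts"]), tz=timezone.utc).isoformat(),
        }
    except (KeyError, ValueError, TypeError):
        return None

def transform(events: list) -> list:
    """Map + filter: drop malformed events, keep normalized rows."""
    return [e for e in (parse_event(r) for r in events) if e is not None]
```

The same shape scales from a unit test to a petabyte job once the loop is replaced by a distributed DataFrame.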
Posted 2 weeks ago
3.0 years
0 Lacs
India
Remote
Remote Job Role: Full Stack Software Engineer with AI. Location: India (Remote). We are seeking an innovative Full Stack Engineer to build end-to-end applications integrating Retrieval-Augmented Generation (RAG)-based AI solutions. Join our team, dedicated to developing cutting-edge applications leveraging RAG and other AI technologies. Title: Full Stack Engineer (AI/RAG Applications) Key Responsibilities
• Design, build, and maintain scalable full-stack applications that integrate RAG-based AI solutions, sophisticated AI models, and advanced data retrieval mechanisms.
• Develop responsive, intuitive user interfaces leveraging modern JavaScript frameworks (React, Angular, Vue) that seamlessly interact with AI backend services.
• Build robust backend APIs and microservices that interface with AI models, vector databases, and retrieval engines, supporting RAG model integrations and real-time data retrieval.
• Integrate Large Language Models (LLMs), embeddings, vector databases, and search algorithms into applications.
• Collaborate closely with AI/ML specialists, data scientists, product owners, and UX designers to translate complex AI capabilities into user-friendly interfaces, define requirements, optimize AI integration, and deliver innovative features.
• Create and manage RESTful and GraphQL APIs to facilitate efficient, secure data exchange between frontend components, backend services, and AI engines.
• Ensure robust security, scalability, and performance optimization of AI-integrated applications.
• Participate actively in code reviews, architecture decisions, and Agile ceremonies, ensuring best practices in software engineering and AI integration.
• Troubleshoot, debug, and enhance performance of both frontend and backend systems, focusing on AI latency, accuracy, and scalability.
• Continuously explore emerging AI trends and technologies, proactively recommending improvements to enhance product capabilities.
Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related technical discipline.
• 3+ years of experience in full-stack software development, with demonstrated experience integrating AI/ML services.
• Strong proficiency in front-end technologies, including HTML, CSS, JavaScript, and frameworks such as React, Angular, or Vue.js.
• Backend development expertise with Node.js, Python, Java, or .NET, particularly building RESTful APIs and microservices.
• Hands-on experience integrating AI and NLP models, particularly NLP-based Large Language Models (e.g., GPT, BERT, LLaMA), including familiarity with Retrieval-Augmented Generation (RAG), OpenAI APIs, LangChain, or similar frameworks.
• Familiarity with RAG architectures, vector databases (e.g., Pinecone, Chroma, Weaviate, Milvus), and embedding techniques is a plus.
• Proficiency with relational and NoSQL databases (PostgreSQL, MongoDB, etc.) and a solid understanding of data modeling best practices.
• Experience with RESTful APIs, GraphQL, microservices, and cloud-native architecture (AWS, Azure, GCP).
• Experience with version control systems (Git), CI/CD pipelines, and containerization (Docker).
Thanks, and Regards Saurabh Kumar | Lead Recruiter saurabh.yadav@ampstek.com | www.ampstek.com https://www.linkedin.com/in/saurabh-kumar-yadav-518927a8/ Call to : +1 609-360-2671
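For illustration, the retrieval step at the heart of a RAG application can be reduced to a toy sketch. The bag-of-words "embedding" below is a stand-in for a real embedding model, and all names and documents are hypothetical; a production system would use a vector database (Pinecone, Weaviate, etc.) and an LLM to answer from the assembled prompt:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system calls an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 2) -> list:
    """Rank documents by similarity to the query; keep the top k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list) -> str:
    """Ground the LLM by pasting retrieved context ahead of the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The "augmented" part of RAG is just `build_prompt`: the model answers from retrieved context instead of its parametric memory.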
Posted 2 weeks ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Dario, Every Day is a New Opportunity to Make a Difference. We are on a mission to make better health easy. Every day our employees contribute to this mission and help hundreds of thousands of people around the globe improve their health. How cool is that? We are looking for passionate, smart, and collaborative people who have a desire to do something meaningful and impactful in their career. We are looking for a talented Senior Software Developer to take responsibility for DarioHealth solutions and products. As a senior backend developer, you will join a growing Agile team of experienced developers building production applications, backend services, data solutions, and platform infrastructure. Responsibilities
• Develop high-scale cloud-based solutions in the health domain, using cutting-edge technologies.
• Design and implement low-latency, high-availability, high-performance services.
• Work in a very dynamic environment that provides the ability to learn and implement new technologies.
• Create RESTful APIs that provide unprecedented access to data via client apps.
• Produce efficient, fully tested, and documented code.
• Be part of a talented and motivated Agile team; a commitment to collaborative problem solving, sophisticated design, and the creation of quality products is essential.
Requirements:
• 4+ years’ experience in back-end development
• 2+ years in NodeJS, JavaScript ES6, TypeScript
• Expertise in using AI development tools
• Experience in MongoDB, PostgreSQL, MySQL, or equivalent
• Strong experience with creating REST and RESTful services
• Strong understanding of microservices, event-driven architectures, serverless and container technologies (Lambda, Docker), and container orchestration platforms such as Kubernetes, OpenShift, or equivalent
• Familiarity with CI/CD pipelines and related tools for unit testing (e.g. JUnit), static and dynamic code scanning (e.g. AppScan, Fortify), and build tools such as Jenkins
• Familiarity with AWS SDKs
• Experience with AWS services such as EKS, RDS, API Gateway
• Experience with Google Cloud and Firebase services
• AWS Certified Developer/Solutions Architect - Big Advantage
• Experience scaling up B2B2C and B2C solutions - Big Advantage
DarioHealth promotes diversity of thought, culture and background, which connects the entire Dario team. We believe that every member of our team enriches our diversity by exposing us to a broad range of ways to understand and engage with the world, identify challenges, and discover, design, and deliver solutions. We are passionate about building and sustaining inclusive and equitable working and learning environments for all people, and do not discriminate against any employee or job candidate.
Posted 2 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Are you passionate about building and maintaining large-scale production systems that support advanced data science and machine learning applications? Do you want to join a team at the heart of NVIDIA's data-driven decision-making culture? If so, we have a great opportunity for you! NVIDIA is seeking a Senior Site Reliability Engineer (SRE) for the Data Science & ML Platform(s) team. The role involves designing, building, and maintaining services that enable real-time data analytics, streaming, data lakes, observability, and ML/AI training and inferencing. The responsibilities include implementing software and systems engineering practices to ensure high efficiency and availability of the platform, as well as applying SRE principles to improve production systems and optimize service SLOs. Additionally, collaboration with our customers to plan and implement changes to the existing system, while monitoring capacity, latency, and performance, is part of the role. To succeed in this position, a strong background in SRE practices, systems, networking, coding, capacity management, cloud operations, continuous delivery and deployment, and open-source cloud enabling technologies like Kubernetes and OpenStack is required. A deep understanding of the challenges and standard methodologies of running large-scale distributed systems in production, solving complex issues, automating repetitive tasks, and proactively identifying potential outages is also necessary. Furthermore, excellent communication and collaboration skills, and a culture of diversity, intellectual curiosity, problem solving, and openness are essential. As a Senior SRE at NVIDIA, you will have the opportunity to work on innovative technologies that power the future of AI and data science, and be part of a dynamic and supportive team that values learning and growth.
The role provides the autonomy to work on meaningful projects with the support and mentorship needed to succeed, and contributes to a culture of blameless postmortems, iterative improvement, and risk-taking. If you are seeking an exciting and rewarding career that makes a difference, we invite you to apply now! What You’ll Be Doing Develop software solutions to ensure reliability and operability of large-scale systems supporting mission-critical use cases. Gain a deep understanding of our system operations, scalability, interactions, and failures to identify improvement opportunities and risks. Create tools and automation to reduce operational overhead and eliminate manual tasks. Establish frameworks, processes, and standard methodologies to enhance operational maturity and team efficiency, and accelerate innovation. Define meaningful and actionable reliability metrics to track and improve system and service reliability. Oversee capacity and performance management to facilitate infrastructure scaling across public and private clouds globally. Build tools to improve our service observability for faster issue resolution. Practice sustainable incident response and blameless postmortems. What We Need To See Minimum of 6+ years of experience in SRE, Cloud platforms, or DevOps with large-scale microservices in production environments. Master's or Bachelor's degree in Computer Science or Electrical Engineering or CE or equivalent experience. Strong understanding of SRE principles, including error budgets, SLOs, and SLAs. Proficiency in incident, change, and problem management processes. Skilled in problem-solving, root cause analysis, and optimization. Experience with streaming data infrastructure services, such as Kafka and Spark. Expertise in building and operating large-scale observability platforms for monitoring and logging (e.g., ELK, Prometheus). Proficiency in programming languages such as Python, Go, Perl, or Ruby.
Hands-on experience with scaling distributed systems in public, private, or hybrid cloud environments. Experience in deploying, supporting, and supervising services, platforms, and application stacks. Ways To Stand Out From The Crowd Experience operating large-scale distributed systems with strong SLAs. Excellent coding skills in Python and Go and extensive experience in operating data platforms. Knowledge of CI/CD systems, such as Jenkins and GitHub Actions. Familiarity with Infrastructure as Code (IaC) methodologies and tools. Excellent interpersonal skills for identifying and communicating data-driven insights. NVIDIA leads the way in groundbreaking developments in Artificial Intelligence, High-Performance Computing, and Visualization. The GPU, our invention, serves as the visual cortex of modern computers and is at the heart of our products and services. Our work opens up new universes to explore, enables amazing creativity and discovery, and powers what were once science fiction inventions, from artificial intelligence to autonomous cars. NVIDIA is looking for exceptional people like you to help us accelerate the next wave of artificial intelligence. JR1999109
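The error budgets, SLOs, and SLAs called out in the requirements reduce to simple arithmetic. A minimal sketch, assuming availability-style SLOs over a 30-day window (the function names and the 30-day default are illustrative, not any particular team's convention):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime for a given availability SLO over the window.
    e.g. a 99.9% SLO over 30 days allows 43.2 minutes of downtime."""
    return window_days * 24 * 60 * (1.0 - slo)

def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent; negative means the SLO is breached."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget
```

The point of the budget framing is that it turns "is this change too risky?" into a number: ship while budget remains, freeze and harden when it runs out.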
Posted 2 weeks ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Senior Site Reliability Engineer (SRE) – Azure Focused
Location: Pune
Experience: 7–14 Years
Notice Period: Immediate to 30 Days
Key Responsibilities
• Ensure availability, latency, performance, and efficiency of global eCommerce sites
• Design and develop E2E observability dashboards and tooling
• Maintain error budgets, meet SLOs, and drive incident response automation
• Collaborate with engineering teams to build highly reliable systems
• Drive proactive monitoring, root cause analysis (RCA), and system optimization
• Build tools to improve incident management and software delivery processes
• Optimize cloud infrastructure for performance and cost, primarily in Azure
• Promote observability best practices and help define instrumentation standards
Required Skills
• 7–14 years in Site Reliability Engineering or DevOps
• Experience supporting cloud production environments (Azure preferred)
• Expertise with monitoring tools: Splunk, Dynatrace, Datadog, Grafana, New Relic
• Strong scripting skills – Python preferred (Shell acceptable)
• Hands-on with CI/CD tools – GitLab, Jenkins, Azure DevOps, etc.
• Proficient in Kubernetes, Docker, Terraform, and Ansible
• Knowledge of configuration management – Ansible, Chef, or AWS CodeDeploy
• Proven troubleshooting skills with strong ownership mindset
• Passionate about automation, observability, and platform reliability
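The latency SLOs above are usually tracked as percentiles (p50, p95, p99). As a small illustration, here is a nearest-rank percentile helper of the kind an observability dashboard computes; this is one of several percentile conventions, and the sketch assumes raw latency samples are available in memory rather than pre-aggregated histograms:

```python
import math

def percentile(samples: list, p: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    p% of all samples are <= it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]
```

At scale, monitoring systems approximate this with histogram buckets (as Prometheus's `histogram_quantile` does) instead of sorting raw samples.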
Posted 2 weeks ago
3.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Responsibilities Write reusable, testable, and efficient code. Design and implement low-latency, high-availability, and performant applications. Design and create RESTful APIs for internal and partner consumption. Implement security and data protection. Debug code on the platform (written by self or others) to find the root cause of any ongoing issues and rectify them. Optimize database queries, and design and implement scalable database schemas that represent and support business processes. Implement web applications in Python, SQL, JavaScript, HTML, and CSS. Provide technical leadership to teammates through coaching and mentorship. Delegate tasks and set deadlines. Monitor team performance and report on it. Collaborate with other software developers and business analysts to plan, design, and develop applications. Maintain client relationships and ensure company deliverables meet the highest expectations of the client. Qualification & Skills Mandatory 3+ years experience in Django/Flask. Solid database skills in relational databases. Knowledge of how to build and use RESTful APIs. Strong knowledge of version control. Hands-on experience working on Linux systems. Familiarity with ORM (Object Relational Mapper) libraries. Experience with SQLAlchemy is a plus. Knowledge of Redis. Strong understanding of peer review best practices. Hands-on experience in deployment processes. Good to Have Proficiency in AWS, Azure, or GCP (any one). Experience with Docker.
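The query-optimization responsibility above can be illustrated with SQLite from the Python standard library. The schema, index, and data are hypothetical; the point is that an index on the filtered column lets the planner do an index search instead of a full table scan, which `EXPLAIN QUERY PLAN` makes visible:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL,
        total REAL NOT NULL
    );
    -- Index on the column we filter by, so lookups avoid a full scan
    CREATE INDEX idx_orders_customer ON orders (customer_id);
""")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(1, 10.0), (1, 15.5), (2, 7.25)],
)

# Inspect the plan: it should report SEARCH ... USING INDEX idx_orders_customer
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(total) FROM orders WHERE customer_id = ?", (1,)
).fetchall()

total = conn.execute(
    "SELECT SUM(total) FROM orders WHERE customer_id = ?", (1,)
).fetchone()[0]
```

The same habit (check the plan before and after adding an index) carries over directly to PostgreSQL's `EXPLAIN ANALYZE`.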
Posted 2 weeks ago
15.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Drive the Future of Data-Driven Entertainment Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, operational excellence to enhance efficiency and speed, and a visionary approach to how Data Engineering can drive company success. If you’re ready to make a significant impact in the world of OTT and entertainment, let’s talk. AVP, Data Engineering – SonyLIV Location: Bangalore Responsibilities: Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP). Lead Innovation in Data Processing and Architecture: Advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement. 
Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues. Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency. Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics. Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users. Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access. Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance. This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment. Minimum Qualifications: 15+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments. Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges. 
Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage. Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization. Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence. Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments. Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems. Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Mathematics, Physics, or a related technical field. Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting. Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions. Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency. Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization. Why join us? CulverMax Entertainment Pvt Ltd (Formerly known as Sony Pictures Networks India) is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, SONY TEN1, SONY Ten2, SONY TEN3, SONY TEN4, to name a few! 
Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV, brings us one step closer to being a progressive, digitally led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and Digital Media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self Philosophy. We strive to remain an ‘Employer of Choice’ and have been recognized as:
- India’s Best Companies to Work For 2021 by the Great Place to Work® Institute.
- 100 Best Companies for Women in India by AVTAR & Seramount for 6 years in a row.
- UN Women Empowerment Principles Award 2022 for Gender Responsive Marketplace and Community Engagement & Partnership.
- ET Human Capital Awards 2023 for Excellence in HR Business Partnership & Team Building Engagement.
- ET Future Skills Awards 2022 for Best Learning Culture in an Organization and Best D&I Learning Initiative.
The biggest award of course is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!
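The A/B testing and statistical analysis expertise listed in the qualifications often comes down to comparing two conversion rates. A minimal two-proportion z-test sketch using a pooled standard error (the inputs are hypothetical, and a real analysis would also pre-register the metric and check sample-size assumptions):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for H0: the two variants convert at the same rate.
    conv_* are conversion counts, n_* are sample sizes."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A |z| above roughly 1.96 corresponds to significance at the conventional 5% level for a two-sided test.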
Posted 2 weeks ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology, including Generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking top engineering talent to join our AI team in Hyderabad, India. As an Engineer II, you will play a crucial role in developing and optimizing backend systems that power our web application, including content discovery, knowledge management, learning and coaching, meeting intelligence and various AI capabilities. You will collaborate with cross-functional teams to design, build, and maintain scalable, high-performance systems that deliver exceptional value to our customers. This position offers a unique opportunity to make a significant impact on our company's growth and success by contributing to the technical excellence and innovation of our software solutions. If you are a passionate technologist with a strong track record of building AI products, and you thrive in a fast-paced, innovative environment, we want to hear from you! Seismic AI AI is one of the fastest growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how Enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic enablement cloud. Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Why Join Us Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry. 
Work with cutting-edge technologies and be at the forefront of AI advancements. Competitive compensation package, including salary, bonus, and equity options. A supportive, inclusive work culture. Professional development opportunities and career growth potential in a dynamic and collaborative environment. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Distributed Systems Development: Design, develop, and maintain backend systems and services for AI, information extraction or information retrieval functionality, ensuring high performance, scalability, and reliability. Integration: Collaborate with data scientists, AI engineers, and product teams to integrate AI-driven capabilities across the Seismic platform. 
Performance Tuning: Monitor and optimize service performance, addressing bottlenecks and ensuring low-latency query responses. Technical Leadership: Provide technical guidance and mentorship to junior engineers, promoting best practices in backend software development. Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences. Continuous Improvement: Stay updated with the latest trends and advancements in software and technologies, conducting research and experimentation to drive innovation. Experience: 2+ years of experience in software engineering and a proven track record of building and scaling microservices and working with data retrieval systems. Technical Expertise: Experience with C# and .NET, unit testing, object-oriented programming, and relational databases. Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have. Front-end/full-stack experience is a plus. Cloud Expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes is a plus. AI: Proficiency in building and deploying Generative AI use cases is a plus. Experience with Natural Language Processing (NLP). Semantic search with platforms like Elasticsearch is a plus. SaaS Knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure. Product Development: Experience in collaborating with product management and design, with the ability to translate business requirements into technical solutions that drive successful delivery.
Proven record of driving feature development from concept to launch. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Fast-paced Environment: Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries, including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description About Sutherland: Artificial intelligence. Automation. Cloud engineering. Advanced analytics. For business leaders, these are key factors of success. For us, they’re our core expertise. We work with iconic brands worldwide. We bring them a unique value proposition through market-leading technology and business process excellence. We’ve created over 200 unique inventions under several patents across AI and other critical technologies. Leveraging our advanced products and platforms, we drive digital transformation, optimize critical business operations, reinvent experiences, and pioneer new solutions, all provided through a seamless “as a service” model. For each company, we provide new keys for their businesses, the people they work with, and the customers they serve. We tailor proven and rapid formulas to fit their unique DNA. We bring together human expertise and artificial intelligence to develop digital chemistry. This unlocks new possibilities, transformative outcomes, and enduring relationships. Sutherland: Unlocking digital performance. Delivering measurable results. Job Description We are looking for a proactive and detail-oriented AI Ops Engineer to support the deployment, monitoring, and maintenance of AI/ML models in production. Reporting to the AI Developer, this role will focus on MLOps practices including model versioning, CI/CD, observability, and performance optimization in cloud and hybrid environments. Key Responsibilities: Build and manage CI/CD pipelines for ML models using platforms like MLflow, Kubeflow, or SageMaker. Monitor model performance and health using observability tools and dashboards. Ensure automated retraining, version control, rollback strategies, and audit logging for production models. Support deployment of LLMs, RAG pipelines, and agentic AI systems in scalable, containerized environments. Collaborate with AI Developers and Architects to ensure reliable and secure integration of models into enterprise systems.
Troubleshoot runtime issues, latency, and accuracy drift in model predictions and APIs. Contribute to infrastructure automation using Terraform, Docker, Kubernetes, or similar technologies.

Qualifications
Required Qualifications: 3–5 years of experience in DevOps, MLOps, or platform engineering roles with exposure to AI/ML workflows. Hands-on experience with deployment tools like Jenkins, Argo, GitHub Actions, or Azure DevOps. Strong scripting skills (Python, Bash) and familiarity with cloud environments (AWS, Azure, GCP). Understanding of containerization, service orchestration, and monitoring tools (Prometheus, Grafana, ELK). Bachelor’s degree in Computer Science, IT, or a related field.

Preferred Skills: Experience supporting GenAI or LLM applications in production. Familiarity with vector databases, model registries, and feature stores. Exposure to security and compliance standards in model lifecycle management.
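Accuracy drift of the kind this role monitors is commonly quantified with a statistic such as the Population Stability Index (PSI) between a baseline sample and live traffic. A minimal, framework-agnostic sketch of that check (the bucket count and the 0.25 alert threshold are conventional rules of thumb, not values from this posting):

```python
import math

def psi(expected, actual, buckets=10):
    """Population Stability Index between a baseline and a live sample.

    Values are bucketed on the baseline's range; a tiny epsilon stands in
    for empty buckets so the log term stays defined. Rule of thumb:
    PSI < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / buckets or 1.0

    def hist(values):
        counts = [0] * buckets
        for v in values:
            idx = min(int((v - lo) / width), buckets - 1)
            counts[max(idx, 0)] += 1
        total = len(values)
        return [(c / total) or 1e-6 for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score near zero; a shifted one trips the alarm.
baseline = [i / 100 for i in range(100)]
shifted = [0.5 + i / 200 for i in range(100)]
assert psi(baseline, baseline) < 0.01
assert psi(baseline, shifted) > 0.25
```

In a real pipeline this check would run on a schedule against a model registry's logged predictions and page on-call when the threshold is crossed.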
Posted 2 weeks ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Staff Engineer - Data, Digital Business Role Overview - Role involves leading SonyLIV's data engineering strategy, architecting scalable data infrastructure, driving innovation in data processing, ensuring operational excellence, and fostering a high-performance team to enable data-driven insights for OTT content and user engagement. Location - Mumbai Experience - 8+ years Responsibilities: Define the Technical Vision for Scalable Data Infrastructure: Establish a robust technical strategy for SonyLIV’s data and analytics platform, architecting a scalable, high-performance data ecosystem using modern technologies like Spark, Kafka, Snowflake, and cloud services (AWS/GCP). Lead Innovation in Data Processing and Architecture: Advance SonyLIV’s data engineering practices by implementing real-time data processing, optimized ETL pipelines, and streaming analytics through tools like Apache Airflow, Spark, and Kubernetes. Enable high-speed data processing to support real-time insights for content and user engagement. Ensure Operational Excellence in Data Systems: Set and enforce standards for data reliability, privacy, and performance. Define SLAs for production data processes, using monitoring tools (Grafana, Prometheus) to maintain system health and quickly resolve issues. Build and Mentor a High-Caliber Data Engineering Team: Recruit and lead a skilled team with strengths in distributed computing, cloud infrastructure, and data security. Foster a collaborative and innovative culture, focused on technical excellence and efficiency. Collaborate with Cross-Functional Teams: Partner closely with Data Scientists, Software Engineers, and Product Managers to deliver scalable data solutions for personalization algorithms, recommendation engines, and content analytics. Architect and Manage Production Data Models and Pipelines: Design and launch production-ready data models and pipelines capable of supporting millions of users. 
Utilize advanced storage and retrieval solutions like Hive, Presto, and BigQuery to ensure efficient data access. Drive Data Quality and Business Insights: Implement automated quality frameworks to maintain data accuracy and reliability. Oversee the creation of BI dashboards and data visualizations using tools like Tableau and Looker, providing actionable insights into user engagement and content performance. This role offers the opportunity to lead SonyLIV’s data engineering strategy, driving technological innovation and operational excellence while enabling data-driven decisions that shape the future of OTT entertainment. Minimum Qualifications: 8+ years of progressive experience in data engineering, business intelligence, and data warehousing, including significant expertise in high-volume, real-time data environments. Proven track record in building, scaling, and managing large data engineering teams (10+ members), including experience managing managers and guiding teams through complex data challenges. Demonstrated success in designing and implementing scalable data architectures, with hands-on experience using modern data technologies (e.g., Spark, Kafka, Redshift, Snowflake, BigQuery) for data ingestion, transformation, and storage. Advanced proficiency in SQL and experience with at least one object-oriented programming language (Python, Java, or similar) for custom data solutions and pipeline optimization. Strong experience in establishing and enforcing SLAs for data availability, accuracy, and latency, with a focus on data reliability and operational excellence. Extensive knowledge of A/B testing methodologies and statistical analysis, including a solid understanding of the application of these techniques for user engagement and content analytics in OTT environments. Skilled in data governance, data privacy, and compliance, with hands-on experience implementing security protocols and controls within large data ecosystems. 
Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Mathematics, Physics, or a related technical field. Experience managing the end-to-end data engineering lifecycle, from model design and data ingestion through to visualization and reporting. Experience working with large-scale infrastructure, including cloud data warehousing, distributed computing, and advanced storage solutions. Familiarity with automated data lineage and data auditing tools to streamline data governance and improve transparency. Expertise with BI and visualization tools (e.g., Tableau, Looker) and advanced processing frameworks (e.g., Hive, Presto) for managing high-volume data sets and delivering insights across the organization. Why SPNI? Join Our Team at SonyLIV Drive the Future of Data-Driven Entertainment Are you passionate about working with big data? Do you want to shape the direction of products that impact millions of users daily? If so, we want to connect with you. We’re seeking a leader for our Data Engineering team who will collaborate with Product Managers, Data Scientists, Software Engineers, and ML Engineers to support our AI infrastructure roadmap. In this role, you’ll design and implement the data architecture that guides decision-making and drives insights, directly impacting our platform’s growth and enriching user experiences. As a part of SonyLIV, you’ll work with some of the brightest minds in the industry, access one of the most comprehensive data sets in the world and leverage cutting-edge technology. Your contributions will have a tangible effect on the products we deliver and the viewers we engage. The ideal candidate will bring a strong foundation in data infrastructure and data architecture, a proven record of leading and scaling data teams, and operational excellence to enhance efficiency.
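The real-time engagement insights described above come down to windowed aggregation over an event stream; in production this would run on Spark Structured Streaming or Kafka Streams, but the core mechanism can be sketched in plain Python (the event fields and the 60-second window are illustrative assumptions):

```python
from collections import Counter, deque

class SlidingWindowCounter:
    """Counts events per key over the last `window` seconds of event time."""

    def __init__(self, window=60):
        self.window = window
        self.events = deque()   # (timestamp, key), oldest first
        self.counts = Counter()

    def add(self, ts, key):
        self.events.append((ts, key))
        self.counts[key] += 1
        self._evict(ts)

    def _evict(self, now):
        # Drop events strictly older than the window start.
        while self.events and self.events[0][0] < now - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1

    def top(self, n=3):
        return self.counts.most_common(n)

w = SlidingWindowCounter(window=60)
for ts, content_id in [(0, "show_a"), (10, "show_b"), (30, "show_a"), (70, "show_a")]:
    w.add(ts, content_id)
# The event at ts=0 has aged out of the 60s window by ts=70.
assert w.counts["show_a"] == 2
assert w.counts["show_b"] == 1
```

A streaming engine adds the hard parts on top of this idea: partitioning by key, watermarks for late events, and fault-tolerant state.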
Posted 2 weeks ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
WHAT MAKES US A GREAT PLACE TO WORK We are proud to be consistently recognized as one of the world’s best places to work. We are currently the #1 ranked consulting firm on Glassdoor’s Best Places to Work list and have maintained a spot in the top four on Glassdoor’s list since its founding in 2009. Extraordinary teams are at the heart of our business strategy, but these don’t happen by chance. They require intentional focus on bringing together a broad set of backgrounds, cultures, experiences, perspectives, and skills in a supportive and inclusive work environment. We hire people with exceptional talent and create an environment in which every individual can thrive professionally and personally. WHO YOU’LL WORK WITH You’ll join our Application Engineering experts within the AI, Insights & Solutions team. This team is part of Bain’s digital capabilities practice, which includes experts in analytics, engineering, product management, and design. In this multidisciplinary environment, you'll leverage deep technical expertise with business acumen to help clients tackle their most transformative challenges. You’ll work on integrated teams alongside our general consultants and clients to develop data-driven strategies and innovative solutions. Together, we create human-centric solutions that harness the power of data and artificial intelligence to drive competitive advantage for our clients. Our collaborative and supportive work environment fosters creativity and continuous learning, enabling us to consistently deliver exceptional results. WHAT YOU’LL DO Design, develop, and maintain cloud-based AI applications, leveraging a full-stack technology stack to deliver high-quality, scalable, and secure solutions. Collaborate with cross-functional teams, including product managers, data scientists, and other engineers, to define and implement analytics features and functionality that meet business requirements and user needs. 
Utilize Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in cloud environments, ensuring optimal performance and availability. Develop and maintain APIs and microservices to expose analytics functionality to internal and external consumers, adhering to best practices for API design and documentation. Implement robust security measures to protect sensitive data and ensure compliance with data privacy regulations and organizational policies. Continuously monitor and troubleshoot application performance, identifying and resolving issues that impact system reliability, latency, and user experience. Participate in code reviews and contribute to the establishment and enforcement of coding standards and best practices to ensure high-quality, maintainable code. Stay current with emerging trends and technologies in cloud computing, data analytics, and software engineering, and proactively identify opportunities to enhance the capabilities of the analytics platform. Collaborate with DevOps and infrastructure teams to automate deployment and release processes, implement CI/CD pipelines, and optimize the development workflow for the analytics engineering team. Collaborate closely with and influence business consulting staff and leaders as part of multi-disciplinary teams to assess opportunities and develop analytics solutions for Bain clients across a variety of sectors. Influence, educate and directly support the analytics application engineering capabilities of our clients Travel is required (30%) ABOUT YOU Required Master’s degree in Computer Science, Engineering, or a related technical field. 
6+ years at Senior or Staff level, or equivalent Experience with client-side technologies such as React, Angular, Vue.js, HTML and CSS Experience with server-side technologies such as Django, Flask, FastAPI Experience with cloud platforms and services (AWS, Azure, GCP) via Terraform automation (good to have) 3+ years of Python expertise Use Git as your main tool for versioning and collaborating Experience with DevOps, CI/CD, GitHub Actions Demonstrated interest in LLMs, prompt engineering, LangChain Experience with workflow orchestration - doesn’t matter if it’s dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or any other Experience implementing large-scale structured or unstructured databases, orchestration and container technologies such as Docker or Kubernetes Strong interpersonal and communication skills, including the ability to explain and discuss complex engineering technicalities with colleagues and clients from other disciplines in terms they understand Curiosity, proactivity and critical thinking Strong computer science fundamentals in data structures, algorithms, automated testing, object-oriented programming, performance complexity, and implications of computer architecture on software performance Strong knowledge in designing API interfaces Knowledge of data architecture, database schema design and database scalability Agile development methodologies
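The workflow-orchestration tools named above (dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow) all reduce to the same core: executing a DAG of tasks in dependency order. A minimal sketch of that scheduling idea using the standard library (the task names are illustrative, not from any real pipeline):

```python
from graphlib import TopologicalSorter

# Each task maps to the set of tasks it depends on, Airflow-style.
dag = {
    "extract": set(),
    "clean": {"extract"},
    "features": {"clean"},
    "train": {"features"},
    "report": {"clean"},
}

# static_order() yields a valid execution order and raises CycleError
# if the dependencies are circular.
order = list(TopologicalSorter(dag).static_order())

# Every dependency comes before its dependents.
assert order.index("extract") < order.index("clean")
assert order.index("clean") < order.index("train")
assert order.index("clean") < order.index("report")
```

Real orchestrators layer retries, scheduling, and parallel execution of independent branches (here, `features` and `report`) on top of exactly this ordering.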
Posted 2 weeks ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description The Seller Flex team, located in Bangalore, is looking for an SDE to deliver strategic goals for Amazon eCommerce systems. This is an opportunity to join our mission to build tech solutions that empower sellers to delight the next billion customers. You will be responsible for building new system capabilities from the ground up for strategic business initiatives. If you feel excited by the challenge of setting the course for large company wide initiatives and building and launching customer facing products in international locales, this may be the next big career move for you. We are building systems which can scale across multiple marketplaces and automate large-scale eCommerce operations. We are looking for an SDE1 to design and build our tech stack as a coherent architecture and deliver capabilities across marketplaces. We operate in a high performance co-located agile ecosystem where SDEs, Product Managers and Principals frequently connect with end customers of our products. Our SDEs stay connected with customers through seller/FC/Delivery Station visits and customer anecdotes. This allows our engineers to significantly influence product roadmap, contribute to PRFAQs and create disproportionate impact through the tech they deliver. We offer Technology leaders a once in a lifetime opportunity to transform billions of lives across the planet through their tech innovations. In this role, you will have front row seats to how we are disrupting eCommerce fulfilment and supply chain by offering creative solutions to yet-unsolved problems. We operate like a start-up within the Amazon ecosystem and have a proven track record of delivering inventions that work globally. You will be challenged to look at the world through the eyes of our seller customers and think outside the box to build new tech solutions to make our Sellers successful.
You will often find yourself building products and services that are new to Amazon and will have an opportunity to pioneer not just the technology components but the idea itself across other Amazon teams and markets. See below for a couple of anecdotes, should you want to hear an SDE's perspective on what it is like to work in this team: “I have worked on other global tech platforms at Amazon prior to SellerFlex and what I find extremely different and satisfying here is that in addition to the scale and complexity of work that I do and the customer impact it has, I am part of a team that makes SDEs owners of critical aspects of team functioning – whether it be designing and running engineering excellence programs for design reviews, COE, CR, MCM and Service launch bar raisers or the operational programs for the team. This has allowed me to develop myself not just on the tech or domain as an SDE but also as a wholesome Amazon tech leader for future challenges.” “It is extremely empowering to be a part of this team where I am challenged to learn and innovate in every project that I work on. I get to work across the tech stack and have end-to-end ownership of solution and tech choices. I hadn't worked on as many services in my previous team at Amazon as I have built from scratch, launched and scaled in this team.
The team is in a great place where it is connected to customers closely, is building new stuff from scratch and has to deal with very light Ops burden due to the great architecture and design choices that are being made by SDEs” KEY RESPONSIBILITIES Work closely with senior and principal engineers to architect and deliver high quality technology solutions Own development in multiple layers of the stack including distributed workflows hosted in native AWS architecture Operational rigor for a rapidly growing tech stack Contribute to patents, tech talks and innovation drives Assist in the continual hiring and development of technical talent Measure success metrics and influence evolution of the tech product Loop Competencies Basic Qualifications Bachelor's degree or higher in Computer Science and 1+ years of Software Development experience Proven track record of building large-scale, highly available, low latency, high quality distributed systems and software products Possess an extremely sound understanding of basic areas of Computer Science such as Algorithms, Data Structures, Object Oriented Design, Databases. Good understanding of AWS services such as EC2, S3, DynamoDB, Elasticsearch, Lambda, API Gateway, ECR, ECS, Lex etc.
Excellent coding skills in an object-oriented language such as Java or Scala Great problem solving skills and propensity to learn and develop tech talent Basic Qualifications 3+ years of non-internship professional software development experience 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience 3+ years of computer science fundamentals (object-oriented design, data structures, algorithm design, problem solving and complexity analysis) experience Experience programming with at least one software programming language Preferred Qualifications 3+ years of full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations experience Bachelor's degree in computer science or equivalent Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - Karnataka Job ID: A2980587
Posted 2 weeks ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Designation: Associate Vice President - Android, Digital Business (12+ years) Location: Gurugram / Bangalore About the Role Sony LIV is on a mission to deliver a world-class streaming experience to millions of users across devices. We’re looking for an experienced Android engineer with deep expertise in ExoPlayer/Media3 and strong command over the Android framework, who thrives in a hybrid role — leading cross-functional initiatives while also diving deep into code to deliver high-performance, scalable solutions. This is a unique opportunity to own critical areas of media playback and performance, contribute to architecture decisions, and drive collaboration across engineering, product, and QA. What You’ll Do Lead the design and implementation of advanced media playback workflows using ExoPlayer (Media3), ensuring low latency, seamless buffering, adaptive streaming, and DRM integrations. Drive performance improvements across app launch, playback, memory, and battery. Collaborate closely with product managers, iOS/web counterparts, backend, and QA to build delightful and robust video experiences. Mentor and guide a team of Android engineers — promote clean architecture, code quality, and modern development practices. Contribute individually to high-priority feature development and performance debugging. Stay ahead of Android platform updates and integrate Jetpack libraries, modern UI frameworks, and best practices (e.g., Kotlin Coroutines, Hilt, Jetpack Compose, Paging, etc.) Own cross-functional technical discussions for media strategy, caching, telemetry, offline, or A/V compliance. What We’re Looking For 10–15 years of Android development experience with strong fundamentals in Kotlin, ExoPlayer/Media3, and the Android media framework. Deep understanding of streaming protocols (HLS/DASH), adaptive bitrate streaming, DRM (Widevine), and analytics tagging. Experience in performance optimization – memory, power, cold start, and playback smoothness. 
Hands-on with modern Android stack: Jetpack Compose, Kotlin Flows, WorkManager, ViewModel, Room, Hilt/Dagger, etc. Familiarity with CI/CD, app modularization, crash analytics, and A/B experimentation frameworks (e.g., Firebase, AppCenter, etc.). Comfortable navigating ambiguity — can switch gears between IC and leadership responsibilities based on the team’s needs. Strong communication skills and ability to collaborate across teams and functions. Nice to Have Experience with Android TV / Fire TV or other large-screen form factors. Prior work on live streaming, low latency playback, or sports content. Familiarity with AV1, Dolby Vision/Atmos, or advanced video/audio codecs. Contributions to open-source media libraries or ExoPlayer itself. Why Sony? Sony Pictures Networks is home to some of India’s leading entertainment channels such as SET, SAB, MAX, PAL, PIX, Sony BBC Earth, Yay!, Sony Marathi, Sony SIX, Sony TEN, Sony TEN1, SONY Ten2, SONY TEN3, SONY TEN4, to name a few! Our foray into the OTT space with one of the most promising streaming platforms, Sony LIV brings us one step closer to being a progressive digitally-led content powerhouse. Our independent production venture, Studio Next, has already made its mark with original content and IPs for TV and Digital Media. But our quest to Go Beyond doesn’t end there. Neither does our search to find people who can take us there. We focus on creating an inclusive and equitable workplace where we celebrate diversity with our Bring Your Own Self Philosophy and are recognised as a Great Place to Work. - Great Place to Work Institute: ranked as one of the Great Places to Work for 5 years running - Included in the Hall of Fame as a part of the Working Mother & Avtar Best Companies for Women in India study; ranked amongst the 100 Best Companies for Women In India - ET Human Capital Awards 2021: winner across multiple categories - Brandon Hall Group HCM Excellence Award: Outstanding Learning Practices.
The biggest award of course is the thrill our employees feel when they can Tell Stories Beyond the Ordinary!
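The adaptive bitrate streaming this role calls for boils down to repeatedly picking the highest rendition whose bitrate fits under a fraction of measured throughput. ExoPlayer's `AdaptiveTrackSelection` does a more sophisticated version of this in Java/Kotlin; the ladder values and the 0.7 safety fraction below are illustrative assumptions, sketched here in Python for brevity:

```python
def select_rendition(ladder_kbps, throughput_kbps, safety=0.7):
    """Pick the highest bitrate that fits within safety * throughput.

    Falls back to the lowest rendition when even that doesn't fit,
    so playback degrades in quality instead of stalling to rebuffer.
    """
    budget = throughput_kbps * safety
    fitting = [b for b in ladder_kbps if b <= budget]
    return max(fitting) if fitting else min(ladder_kbps)

ladder = [300, 750, 1500, 3000, 6000]  # typical HLS/DASH rungs, illustrative
assert select_rendition(ladder, 5000) == 3000   # 5000 * 0.7 = 3500 budget
assert select_rendition(ladder, 10000) == 6000
assert select_rendition(ladder, 200) == 300     # degrade, don't stall
```

Production ABR adds buffer-level hysteresis on top of this, switching up only when the buffer is healthy, which is where the "seamless buffering" requirement above gets hard.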
Posted 2 weeks ago
15.0 years
0 Lacs
India
Remote
About Us MyRemoteTeam, Inc. is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better. Job Title: AWS Cloud Architect Experience: 15+ Years Mandatory Skills ✔ 15+ years in Java Full Stack (Spring Boot, Microservices, ReactJS) ✔ Cloud Architecture: AWS EKS, Kubernetes, API Gateway (APIGEE/Tyk) ✔ Event Streaming: Kafka, RabbitMQ ✔ Database Mastery: PostgreSQL (performance tuning, scaling) ✔ DevOps: GitLab CI/CD, Terraform, Grafana/Prometheus ✔ Leadership: Technical mentoring, decision-making About the Role We are seeking a highly experienced AWS Cloud Architect with 15+ years of expertise in full-stack Java development, cloud-native architecture, and large-scale distributed systems. The ideal candidate will be a technical leader capable of designing, implementing, and optimizing high-performance cloud applications across on-premise and multi-cloud environments (AWS). This role requires deep hands-on skills in Java, Microservices, Kubernetes, Kafka, and observability tools, along with a strong architectural mindset to drive innovation and mentor engineering teams. Key Responsibilities ✅ Cloud-Native Architecture & Leadership: Lead the design, development, and deployment of scalable, fault-tolerant cloud applications (AWS EKS, Kubernetes, Serverless). Define best practices for microservices, event-driven architecture (Kafka), and API management (APIGEE/Tyk). Architect hybrid cloud solutions (on-premise + AWS/GCP) with security, cost optimization, and high availability. ✅ Full-Stack Development: Develop backend services using Java, Spring Boot, and PostgreSQL (performance tuning, indexing, replication). Build modern frontends with ReactJS (state management, performance optimization). Design REST/gRPC APIs and event-driven systems (Kafka, SQS).
✅ DevOps & Observability: Manage Kubernetes (EKS) clusters, Helm charts, and GitLab CI/CD pipelines. Implement Infrastructure as Code (IaC) using Terraform/CloudFormation. Set up monitoring (Grafana, Prometheus), logging (ELK), and alerting for production systems. ✅ Database & Performance Engineering: Optimize PostgreSQL for high throughput, replication, and low-latency queries. Troubleshoot database bottlenecks, caching (Redis), and connection pooling. Design data migration strategies (on-premise → cloud). ✅ Mentorship & Innovation: Mentor junior engineers and conduct architecture reviews. Drive POCs on emerging tech (Service Mesh, Serverless, AI/ML integrations). Collaborate with CTO/Architects on long-term technical roadmaps.
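Connection pooling, one of the PostgreSQL tuning levers listed above, is usually delegated to PgBouncer or HikariCP, but the underlying mechanism is simply a bounded pool of reusable handles. A sketch with a stand-in connection class (everything here is illustrative, not a real driver):

```python
import queue

class FakeConn:
    """Stand-in for a real DB connection (illustrative only)."""
    opened = 0

    def __init__(self):
        FakeConn.opened += 1

class ConnectionPool:
    """Bounded pool: at most `size` connections ever exist."""

    def __init__(self, factory, size=5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=1.0):
        # Blocks (up to timeout) when all connections are checked out:
        # this is what caps load on the database.
        return self._pool.get(timeout=timeout)

    def release(self, conn):
        self._pool.put(conn)

pool = ConnectionPool(FakeConn, size=2)
c1, c2 = pool.acquire(), pool.acquire()
pool.release(c1)
c3 = pool.acquire()          # reuses c1 rather than opening a new connection
assert c3 is c1
assert FakeConn.opened == 2  # only `size` connections were ever created
```

Sizing this pool against PostgreSQL's `max_connections` is a core part of the "high throughput, low-latency" tuning the role describes: too small starves the app, too large thrashes the database.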
Posted 2 weeks ago
5.0 years
5 - 7 Lacs
Thiruvananthapuram
On-site
5 - 7 Years 1 Opening Trivandrum Role description Role Proficiency: Act creatively to develop applications and select appropriate technical options optimizing application development maintenance and performance by employing design patterns and reusing proven solutions account for others' developmental activities Outcomes: Interpret the application/feature/component design to develop the same in accordance with specifications. Code debug test document and communicate product/component/feature development stages. Validate results with user representatives; integrates and commissions the overall solution Select appropriate technical options for development such as reusing improving or reconfiguration of existing components or creating own solutions Optimises efficiency cost and quality. Influence and improve customer satisfaction Set FAST goals for self/team; provide feedback to FAST goals of team members Measures of Outcomes: Adherence to engineering process and standards (coding standards) Adherence to project schedule / timelines Number of technical issues uncovered during the execution of the project Number of defects in the code Number of defects post delivery Number of non compliance issues On time completion of mandatory compliance trainings Outputs Expected: Code: Code as per design Follow coding standards templates and checklists Review code – for team and peers Documentation: Create/review templates checklists guidelines standards for design/process/development Create/review deliverable documents. 
Design documentation and requirements, test cases/results Configure: Define and govern configuration management plan Ensure compliance from the team Test: Review and create unit test cases, scenarios and execution Review test plan created by testing team Provide clarifications to the testing team Domain relevance: Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client. Learn more about the customer domain, identifying opportunities to provide valuable additions to customers Complete relevant domain certifications Manage Project: Manage delivery of modules and/or manage user stories Manage Defects: Perform defect RCA and mitigation Identify defect trends and take proactive measures to improve quality Estimate: Create and provide input for effort estimation for projects Manage knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities Review the reusable documents created by the team Release: Execute and monitor release process Design: Contribute to creation of design (HLD, LLD, SAD)/architecture for Applications/Features/Business Components/Data Models Interface with Customer: Clarify requirements and provide guidance to development team Present design options to customers Conduct product demos Manage Team: Set FAST goals and provide feedback Understand aspirations of team members and provide guidance, opportunities etc. Ensure team is engaged in project Certifications: Take relevant domain/technology certification Skill Examples: Explain and communicate the design/development to the customer Perform and evaluate test results against product specifications Break down complex problems into logical components Develop user interfaces and business software components Use data models Estimate time and effort required for developing/debugging features/components Perform and evaluate tests in the customer or target environment Make quick decisions on technical/project-related challenges Manage a team, mentor and handle people-related issues in the team Maintain high motivation levels and positive dynamics in the team Interface with other teams, designers and other parallel practices Set goals for self and team; provide feedback to team members Create and articulate impactful technical presentations Follow high level of business etiquette in emails and other business communication Drive conference calls with customers, addressing customer questions Proactively ask for and offer help Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks Build confidence with customers by meeting the deliverables on time with quality Estimate time, effort and resources required for developing/debugging features/components Make appropriate utilization of software/hardware Strong analytical and problem-solving abilities Knowledge Examples: Appropriate software programs/modules Functional and technical designing Programming languages – proficient in multiple skill clusters DBMS Operating Systems and software platforms Software Development Life Cycle Agile – Scrum or Kanban methods Integrated development environment (IDE) Rapid application development (RAD) Modelling technology and languages Interface definition languages (IDL) Knowledge of customer domain and deep understanding of sub-domain where problem is solved Additional Comments: Design, build, and maintain robust, reactive REST APIs using Spring WebFlux and Spring Boot Develop and optimize microservices that handle high throughput and low latency Write clean, testable, maintainable code in Java Integrate with MongoDB for CRUD operations, aggregation pipelines, and indexing strategies Apply best practices in API security, versioning, error handling, and documentation Collaborate with front-end developers, DevOps, QA, and product teams Troubleshoot and debug production issues,
identify root causes, and deploy fixes quickly Required Skills & Experience: Strong programming experience in Java 17+ Proficiency in Spring Boot, Spring WebFlux, and Spring MVC Solid understanding of Reactive Programming principles Proven experience designing and implementing microservices architecture Hands-on expertise with MongoDB, including schema design and performance tuning Experience with RESTful API design and HTTP fundamentals Working knowledge of build tools like Maven or Gradle Good grasp of CI/CD pipelines and deployment strategies Skills: Spring WebFlux, Spring Boot, Kafka About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
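The reactive, non-blocking style required above (Spring WebFlux/Project Reactor in Java) has a close analogue in Python's asyncio: a bounded queue provides backpressure, so a fast producer cannot outrun a slow consumer. A language-swapped sketch of the principle, not WebFlux itself:

```python
import asyncio

async def producer(q, items):
    for item in items:
        await q.put(item)   # suspends when the queue is full: backpressure
    await q.put(None)       # sentinel: stream complete

async def consumer(q, out):
    # Walrus loop: pull items until the completion sentinel arrives.
    while (item := await q.get()) is not None:
        out.append(item * 2)  # stand-in for a non-blocking handler

async def main():
    q = asyncio.Queue(maxsize=2)  # small buffer forces backpressure
    out = []
    await asyncio.gather(producer(q, range(5)), consumer(q, out))
    return out

assert asyncio.run(main()) == [0, 2, 4, 6, 8]
```

In Reactor terms, the bounded queue plays the role of `request(n)` demand signaling: neither side blocks a thread while waiting, which is what lets a reactive service sustain the "high throughput and low latency" the posting asks for.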
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderābād
On-site
Who We Are At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. How will you fulfill your potential? Work with a global team of highly motivated platform engineers and software developers building integrated architectures for secure, scalable infrastructure services serving a diverse set of use cases. Partner with colleagues from across technology and risk to ensure an outstanding platform is delivered. Help to provide frictionless integration with the firm’s runtime, deployment and SDLC technologies. Collaborate on feature design and problem solving. Help to ensure reliability; define, measure, and meet service level objectives. Quality coding & integration, testing, release, and demise of software products supporting AWM functions. Engage in quality assurance and production troubleshooting. Help to communicate and promote best practices for software engineering across the Asset Management tech stack. Basic Qualifications A strong grounding in software engineering concepts and implementation of architecture design patterns. A good understanding of multiple aspects of software development in microservices architecture, full stack development experience, identity/access management and technology risk. Sound SDLC practices and tooling experience - version control, CI/CD and configuration management tools. Ability to communicate technical concepts effectively, both written and orally, as well as interpersonal skills required to collaborate effectively with colleagues across diverse technology teams. Experience meeting demands for high availability and scalable system requirements.
• Ability to reason about performance, security, and process interactions in complex distributed systems.
• Ability to understand and effectively debug both new and existing software.
• Experience with metrics and monitoring tooling, including the ability to use metrics to rationally derive system health and availability information.
• Experience in auditing and supporting software based on sound SRE principles.
Preferred Qualifications
• 3+ years of experience using and/or supporting Java-based frameworks and SQL/NoSQL data stores.
• Experience deploying software to containerized environments (Kubernetes/Docker).
• Scripting skills in Python, Shell or Bash.
• Experience with Terraform or similar infrastructure-as-code platforms.
• Experience building services using public cloud providers such as AWS, Azure or GCP.
Goldman Sachs Engineering Culture
At Goldman Sachs, our Engineers don’t just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!
© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
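Deriving system health from metrics, as the SRE bullets describe, often reduces to simple arithmetic over request counts: availability against a target, and how much of the error budget remains. A minimal sketch; the SLO target and traffic numbers are illustrative, not Goldman Sachs figures:

```python
def availability(good_requests: int, total_requests: int) -> float:
    """Fraction of successful requests over a measurement window."""
    return good_requests / total_requests

def error_budget_remaining(slo: float, good: int, total: int) -> float:
    """Fraction of the allowed error budget still unspent.

    slo: target availability, e.g. 0.999 for 'three nines'.
    """
    allowed_errors = (1 - slo) * total
    actual_errors = total - good
    return (allowed_errors - actual_errors) / allowed_errors

# Example: a 99.9% SLO over 1,000,000 requests with 400 failures.
print(availability(999_600, 1_000_000))                    # 0.9996
print(error_budget_remaining(0.999, 999_600, 1_000_000))   # ~0.6: about 60% of the budget remains
```

Framing reliability this way makes the trade-off concrete: while budget remains, the team can ship; once it is spent, reliability work takes priority.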
Posted 2 weeks ago
2.0 years
1 - 6 Lacs
Hyderābād
Remote
Software Engineer
Hyderabad, Telangana, India
Date posted: Jul 17, 2025
Job number: 1832398
Work site: Up to 50% work from home
Travel: 0-25%
Role type: Individual Contributor
Profession: Software Engineering
Discipline: Software Engineering
Employment type: Full-Time
Overview
Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers further. This is a world of more possibilities, more innovation, more openness, and sky’s-the-limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture.
Within Azure Data, the data integration team builds data gravity on the Microsoft Cloud. Massive volumes of data are generated – not just from transactional systems of record, but also from the world around us. Our data integration products – Azure Data Factory and Power Query – make it easy for customers to bring in, clean, shape, and join data, to extract intelligence.
The Fabric Data Integration team is currently seeking a Software Engineer to join their team. This team is in charge of designing, building, and operating a next-generation service that transfers large volumes of data from various source systems to target systems with minimal latency while providing a data-centric orchestration platform. The team focuses on advanced data movement/replication scenarios while maintaining user-friendly interfaces.
Working collaboratively, the team utilizes a range of technologies to deliver high-quality products at a fast pace. We do not just value differences or different perspectives; we seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served.
Qualifications
Required/Minimum Qualifications
• Bachelor's degree in computer science or a related technical discipline AND 2+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python.
Other Requirements
• Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
Preferred/Additional Qualifications
• Bachelor's degree in computer science or a related technical field AND 1+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java; OR master’s degree in computer science or a related technical field AND 1+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java; OR equivalent experience.
• 1+ years of experience developing and shipping system-level features in an enterprise production backend server system.
• Experience building distributed systems with reliability guarantees.
• Understanding of data structures, algorithms, and distributed systems.
• Solve problems by always leading with passion and empathy for customers.
• A desire to work collaboratively, solve problems with groups, find win/win solutions and celebrate successes.
• Enthusiasm, integrity, self-discipline, and results-orientation in a fast-paced environment.
• 1+ years of experience building and supporting production-grade distributed cloud services.
#azdat #azuredata #azdataintegration
Responsibilities
• Build cloud-scale products with a focus on efficiency, reliability and security.
• Build and maintain end-to-end build, test and deployment pipelines.
• Deploy and manage massive Hadoop, Spark and other clusters.
• Contribute to the architecture and design of the products.
• Triage issues and implement solutions to restore service with minimal disruption to the customer and business; perform root cause analysis, trend analysis and post-mortems.
• Own components and drive them end to end, all the way from gathering requirements, development, testing, and deployment to ensuring high quality and availability post deployment.
• Embody our culture and values.
Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work.
• Industry leading healthcare
• Educational resources
• Discounts on products and services
• Savings and investments
• Maternity and paternity leave
• Generous time away
• Giving programs
• Opportunities to network and connect
Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances.
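Moving large volumes of data between systems, as this role describes, typically leans on retries with exponential backoff and jitter to ride out transient failures. A minimal sketch; the `transfer` callback and the limits are illustrative and are not Azure Data Factory APIs:

```python
import random
import time

def transfer_with_retry(transfer, max_attempts=5, base_delay=0.5):
    """Call `transfer()` until it succeeds, backing off exponentially.

    Random jitter spreads retries out so many workers hitting the same
    failing endpoint do not retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return transfer()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # budget exhausted; surface the failure
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Illustrative flaky transfer: fails twice, then succeeds.
attempts = {"n": 0}
def flaky_transfer():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

print(transfer_with_retry(flaky_transfer, base_delay=0.01))  # ok
```

Capping attempts and re-raising on exhaustion keeps failures visible to the orchestration layer instead of retrying forever.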
If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 2 weeks ago
0 years
0 Lacs
India
Remote
Healthcare AI Fellowship (Engineering / Data Science)
Full-time contract, 6 months (Remote)
About iksa.ai
Iksa is the AI operating layer for innovative Life Sciences organizations and Specialty Healthcare Providers to customize privacy-first agentic systems. Our leadership and core engineers have shipped together for years and keep the team deliberately small.
What you’ll do
• Design, train and iterate on agentic pipelines
• Build evaluation frameworks and tooling adopted across the engineering group
• Collaborate daily with physicians, ML engineers and data architects
• Document and share technical insights internally (and publicly when appropriate)
Must-have
• Strong Python plus mastery of at least one ML framework (PyTorch, TensorFlow, JAX, etc.)
• Demonstrated experience shipping applied AI or data-intensive products
• Proficiency in architecting, evaluating and deploying AI systems end-to-end
• Ownership mindset and comfort with rapid iteration
Nice-to-have
• Prior exposure to healthcare data, standards or compliance constraints
• Familiarity with retrieval-augmented generation, property-graph modeling, vector databases, or agent orchestration
Success in six months
• A production-grade multi-agent pipeline integrated into our core platform
• Reusable tooling or documentation adopted by the wider team
• Measurable gains in system performance, latency, or reliability
Why iksa.ai
• Direct mentorship from seasoned healthcare and AI leaders
• Accelerated growth in a high-engagement, low-bureaucracy environment
• Influence over product direction and visibility across international roll-outs
Apply now with your CV and a short note on a recent AI project you’re proud of. We review applications on a rolling basis and will reach out if there’s a strong fit.
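The retrieval half of retrieval-augmented generation, listed among the nice-to-haves, boils down to nearest-neighbour search over embedding vectors. A toy cosine-similarity sketch with hand-made vectors; real systems use learned embeddings and a vector database, and the document names here are purely illustrative:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product normalized by vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" for document snippets (hand-made for illustration).
docs = {
    "dosage guidelines": [0.9, 0.1, 0.0],
    "billing codes":     [0.0, 0.2, 0.9],
    "drug interactions": [0.8, 0.3, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k snippets most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query vector lying close to the clinical documents.
print(retrieve([1.0, 0.2, 0.0]))  # ['dosage guidelines', 'drug interactions']
```

In a full RAG pipeline, the retrieved snippets would then be stuffed into the model's prompt so its answer is grounded in them.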
Posted 2 weeks ago