
4114 Retrieval Jobs - Page 33

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Data Scientist – AI/ML, GenAI & Agentic AI Location: Pune / Bangalore / Indore / Kolkata Job Type: Full-time Experience Level: 4+ Years NP: Immediate Joiner or 15 Days Max Job Description We are seeking a highly skilled and innovative Data Scientist / AI Engineer with deep expertise in AI/ML, Generative AI, and Agentic AI frameworks to join our advanced analytics and AI team. The ideal candidate will possess a robust background in data science and machine learning, along with hands-on experience in building and deploying end-to-end intelligent systems using modern AI technologies including RAG (Retrieval-Augmented Generation), LLMs, and agent orchestration tools. Key Responsibilities Design, build, and deploy machine learning models and Generative AI solutions for a wide range of use cases (text, vision, and tabular data). Develop and maintain AI/ML pipelines for large-scale training and inference in production environments. Leverage frameworks such as LangChain, LangGraph, and CrewAI for building Agentic AI workflows. Fine-tune and prompt-engineer LLMs (e.g., GPT, BERT) for enterprise-grade RAG and NLP solutions. Collaborate with business and engineering teams to translate business problems into AI/ML models that deliver measurable value. Apply advanced analytics techniques such as regression, classification, clustering, sequence modeling, association rules, computer vision, and NLP. Architect and implement scalable AI solutions using Python, PyTorch, TensorFlow, and cloud-native technologies. Ensure integration of AI solutions within existing enterprise architecture using containerized services and orchestration (e.g., Docker, Kubernetes). Maintain documentation and present insights and technical findings to stakeholders. Required Skills and Qualifications Bachelor's/Master's/PhD in Computer Science, Data Science, Statistics, or a related field. Strong proficiency in Python and libraries such as Pandas, NumPy, Scikit-learn, etc. Extensive experience with deep learning frameworks: PyTorch and TensorFlow. Proven experience with Generative AI, LLMs, RAG, BERT, and related architectures. Familiarity with LangChain, LangGraph, and CrewAI, and strong knowledge of agent orchestration and autonomous workflows. Experience with large-scale ML pipelines, MLOps practices, and cloud platforms (AWS, GCP, or Azure). Deep understanding of software engineering principles, design patterns, and enterprise architecture. Strong problem-solving, analytical thinking, and debugging skills. Excellent communication, presentation, and cross-functional collaboration abilities. Preferred Qualifications Experience in fine-tuning LLMs and optimizing prompt engineering techniques. Publications, open-source contributions, or patents in AI/ML/NLP/GenAI. Experience with vector databases and tools such as Pinecone, FAISS, Weaviate, or Milvus. Why Join Us? Work on cutting-edge AI/ML and GenAI innovations. Collaborate with top-tier scientists, engineers, and product teams. Opportunity to shape the next generation of intelligent agents and enterprise AI solutions. Flexible work arrangements and continuous learning culture. To Apply: Please submit your resume and portfolio of relevant AI/ML work (e.g., GitHub, papers, demos) to Shanti.upase@calsoftinc.com
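The posting above asks for hands-on RAG experience. As a rough, framework-free illustration (not the employer's stack), the sketch below embeds documents, retrieves the closest matches for a query by cosine similarity, and packs them into a prompt. The embed function is a hypothetical placeholder for a real embedding model, and the LLM call itself is omitted.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern:
# embed documents, retrieve the most similar ones for a query, and
# assemble them into an LLM prompt. `embed` is a hypothetical placeholder
# for whatever embedding model would be used in practice.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: in practice this would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(384)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    corpus = ["Policy A covers refunds within 30 days.",
              "Policy B covers shipping delays.",
              "Policy C covers warranty claims."]
    print(build_prompt("How long do I have to request a refund?", corpus))
```

In a production pipeline the cosine loop would be replaced by a vector database lookup and the prompt handed to an LLM, but the shape of the workflow is the same.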

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Role Summary We’re hiring a Founding Full-Stack AI/ML Engineer to help build and scale the backbone of our AI system. You’ll lead development across agent orchestration, tool execution, Model Context Protocol (MCP), API integration, and browser-based research workflows. You’ll work closely with the founder on hands-on roadmap development, rapid prototyping, and fast iteration cycles to evolve the product quickly based on real user needs. Responsibilities Build multi-agent systems capable of reasoning, tool use, and autonomous action Implement Model Context Protocol (MCP) strategies to manage complex, multi-source context Integrate third-party APIs (e.g., Crunchbase, PitchBook, CB Insights), scraping APIs, and data aggregators Develop browser-based agents enhanced with computer vision for dynamic research, scraping, and web interaction Optimize inference pipelines, task planning, and system performance Collaborate on architecture, prototyping, and iterative development Experiment with prompt chaining, tool calling, embeddings, and vector search Requirements 5+ years of experience in software engineering or AI/ML development Strong Python skills and experience with LangChain, LlamaIndex, or agentic frameworks Proven experience with multi-agent systems, tool calling, or task planning agents Familiarity with Model Context Protocol (MCP), Retrieval-Augmented Generation (RAG), and multi-modal context handling Experience with browser automation frameworks (e.g., Playwright, Puppeteer, Selenium) Cloud deployment and systems engineering experience (GCP, AWS, etc.) Self-starter attitude with strong product sense and iteration speed Bonus Points Experience with AutoGen, CrewAI, OpenAgents, or ReAct-style frameworks Background in building AI systems that blend structured and unstructured data Experience working in a fast-paced startup environment Previous startup or technical founding team experience This is a unique opportunity to work directly with an industry leader in AI to build a cutting-edge, next-generation AI system from the ground up.
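The multi-agent and tool-calling work this listing describes boils down to a loop: the model either requests a tool call or returns a final answer. The sketch below is a hedged, framework-free illustration; fake_llm and search_company are hypothetical stand-ins, not the API of AutoGen, CrewAI, or LangChain.

```python
# Minimal sketch of an agent tool-use loop: the model (stubbed here as
# `fake_llm`) either requests a tool call or returns a final answer.
# All names are illustrative, not a specific framework's API.
import json

def search_company(name: str) -> str:
    # Hypothetical tool; a real agent might call a data provider or scraper here.
    return f"{name}: Series B fintech, ~120 employees."

TOOLS = {"search_company": search_company}

def fake_llm(history: list[dict]) -> dict:
    # Stand-in for a real LLM call; requests a tool once, then answers.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "search_company", "args": {"name": "Acme AI"}}
    return {"answer": "Acme AI is a Series B fintech with roughly 120 employees."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        decision = fake_llm(history)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append({"role": "tool",
                        "content": json.dumps({"tool": decision["tool"], "result": result})})
    return "Step limit reached."

print(run_agent("Research Acme AI for me."))
```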

Posted 1 week ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Sr. Gen AI Engineer Experience: 8+ years Employment Type: Contract Location: Remote Timings: 6 PM - 2 AM IST Job Summary: We are looking for a Senior Generative AI Engineer who will be responsible for developing, fine-tuning, and deploying Generative AI solutions tailored to business needs. This role requires hands-on expertise in Gen AI development, a strong understanding of Azure, and the flexibility to adapt to evolving AI technologies. It also requires a blend of technical expertise, business acumen, and adaptability, ensuring that Generative AI solutions deliver real impact while staying aligned with evolving business needs. Key Responsibilities Develop and deploy Generative AI solutions using OpenAI, Hugging Face, and Llama APIs to address business challenges. Proficient in API-first development, with experience using development platforms like VS Code and API testing tools such as Postman and Swagger. Strong expertise in relational databases and SQL, with excellent Python skills and a solid understanding of JavaScript. Hands-on experience with the Azure platform, including Azure Blob Storage, Azure AI Search, Azure Data Factory, and Azure AI Foundry. Expertise in NLP, prompt engineering, fine-tuning, AI-powered chatbots, RAG (Retrieval-Augmented Generation), vector databases, and embeddings. Collaborate with business stakeholders and cross-functional teams to align AI solutions with business objectives and handle cross-functional dependencies. Translate complex technical concepts into business-friendly insights and provide actionable recommendations to non-technical stakeholders. Strong communication and stakeholder management skills to effectively gather necessary information and drive AI initiatives forward. Should be a strong team player, actively supporting colleagues and fostering a collaborative, positive team environment. A proactive attitude, good judgment, and common sense in decision making and interactions are essential. Work on relationship building and be able to position the team on good terms with business and other teams.

Posted 1 week ago

Apply

0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

About: Sahajanand Laser Technology Limited. A renowned name, Sahajanand Laser Technology Ltd., situated at Gandhinagar, Gujarat, is a pioneer in the manufacturing of laser marking & engraving, laser cutting, laser welding, and solar cell scribing / micro-machining systems in the industrial segment. Fiber laser marking systems with automation include Laser Marking for Bearings, Laser Marking for Piston Rings, Laser Marking for Valves, Laser Marking for Nozzles, and Laser Marking for Jewelry. Kindly go through our website mentioned below for further details. Website: http://www.sltl.com/. Job Description Designation: Store Manager. Department: Store. Location: Gandhinagar. Experience: 8 to 15 years. Responsibilities: Implement and maintain effective inventory control procedures to ensure accurate stock levels. Monitor stock movement, conduct regular stock checks, and reconcile variances promptly. Coordinate with production and procurement teams to ensure timely and accurate fulfillment of customer orders. Manage order processing, packaging, and shipping activities to meet customer expectations. Optimize the layout of the warehouse for maximum efficiency in storage and retrieval of goods. Oversee the cleanliness and safety of the warehouse environment. Lead and motivate a team of store personnel, providing guidance and support. Conduct regular performance reviews and training sessions to enhance team capabilities. Build and maintain strong relationships with suppliers and vendors. Negotiate terms and agreements to ensure favorable terms for the company. Generate regular reports on inventory levels, order fulfillment, and warehouse performance. Analyze data to identify trends and areas for improvement, implementing solutions. To announce audits twice a week. To coordinate with all auditees and auditors for timely completion of the planned audit. To provide the last audit & MRM points to auditors to conduct the current audit. To coordinate with all auditees and auditors for timely closure of non-conformities. To prepare a report of non-conformances and present it in the MRM. To derive a strategy for effective implementation of the QMS from the NC report. To keep records of all audits. To plan and announce the MRM twice a year. To prepare the agenda of the MRM and circulate it to all HoDs. To define and provide the MRM presentation template to all HoDs. To coordinate with all HoDs for the MRM presentation and to prepare the MoM of the MRM. To circulate the MoM of the MRM to all HoDs and to ensure the timely execution of all MRM points. To present pending MRM points to management every quarter and at each MRM. To monitor quality objectives of all departments on a monthly basis and to present the overall achievement of the objectives to management every month. (ref:iimjobs.com)

Posted 1 week ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Minimum qualifications: Bachelor's degree or equivalent practical experience. 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree. 2 years of experience with data structures or algorithms. 1 year of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging). 1 year of experience with GenAI concepts (Large Language Model, Multi-Modal, Large Vision Models) and experience with text, image, video, or audio generation. Preferred qualifications: Master's degree or PhD in Computer Science or related technical fields. Experience developing accessible technologies. Experience in Machine Learning and in Generative AI. Experience in large scale data systems. Experience with Python, Notebooks, ML Frameworks (e.g. Tensorflow). About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Explore and execute on GenAI technology. Take initiative, and be adept at navigating ambiguity. Facilitate alignment within the team. Manage project priorities, deadlines, and deliverables. Design, develop, test, deploy, maintain, and enhance large scale software solutions. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote

If you haven’t built and scaled a 0-1 product to at least 7-figure (USD) ARR and/or been part of a YC (or equivalent startup) founding engineer team, we kindly ask that you don't apply for this role. Our client is looking for a hands-on engineer who dreams about LLM solutions, keeps up with the latest AI research for fun, and has previously led technical early-stage startups in a YC batch (or an equivalent high-growth environment). About Us Our company is an innovative startup revolutionizing how businesses operate through AI and automation. With a focus on efficiency and scalability, we are building the future of intelligent workflows. To drive this mission, we are seeking a highly skilled Technical Chief of Staff. This role is for someone deeply embedded in the startup and AI ecosystem, passionate about building AI-powered products from the ground up and integrating them into real-world applications. Role Overview As the Technical Chief of Staff, you will report directly to the CEO and play a critical role in shaping the company’s technical direction. You will build AI-powered applications end-to-end, ensuring seamless integration with existing business processes. This role requires a blend of hands-on engineering, AI research, and strategic thinking to develop scalable solutions that align with company goals. You will collaborate across departments, identifying pain points and implementing AI-driven automation to drive efficiency and innovation. This is a fully remote position. Key Responsibilities 1. AI Product Development & Execution Design, develop, and deploy AI-native applications integrating LLMs and automation. Build and maintain full-stack applications that leverage AI models for decision-making and workflow optimization. Develop and scale agentic AI workflows that automate complex business processes. Continuously refine AI-driven products to improve efficiency, usability, and impact. 2. Strategic Leadership & Cross-Functional Collaboration Partner with teams across the company to identify business challenges and design AI-driven solutions. Bridge technical execution and strategic decision-making to ensure AI initiatives align with company objectives. Develop data-driven strategies to measure and optimize the effectiveness of AI implementations. 3. AI Research & Scalability Stay engaged with the latest AI advancements, including LLMs, multi-agent systems, and emerging frameworks. Architect and deploy scalable AI infrastructure, ensuring efficiency in high-growth environments. Optimize AI models for performance, accuracy, and real-world application. 4. Industry Engagement & Thought Leadership Apply insights from thought leaders like Paul Graham, A16Z, and Sequoia Capital to inform technical and strategic initiatives. Serve as an advisor to leadership on trends in AI, automation, and product scalability. Contribute to the company’s AI thought leadership through research, whitepapers, or technical discussions. 5. Team Enablement & AI Integration Guide and mentor engineering teams on AI adoption, architecture, and optimization. Promote a culture of AI-driven experimentation and continuous learning. Train internal teams on leveraging AI-assisted coding tools such as GitHub Copilot, Claude, and Replit. Who You Are You have a strong track record of building and deploying AI-powered applications end-to-end. You have worked in high-growth startup environments and understand how to ship AI products at speed. You are a strategic thinker who can apply AI to solve real business challenges.
You thrive in cross-functional collaboration, translating technical concepts for product, design, and business teams. You stay ahead of the curve in AI research and actively experiment with new tools and methodologies. Qualifications a. Technical Expertise 5+ years of experience in full-stack engineering or AI-focused software development. Strong proficiency in Python, JavaScript, React, or equivalent languages. Expertise in LLMs, AI models, and tools such as OpenAI, Hugging Face, LangChain, and vector databases. Experience integrating AI models into production environments, including APIs, fine-tuning, and retrieval-augmented generation (RAG). Proven ability to scale AI-powered products while optimizing for performance and usability. Familiarity with AI-assisted coding tools such as GitHub Copilot, Claude, and Replit. b. Startup & Industry Knowledge Experience successfully building and scaling AI-powered products in early-stage startups. Strong understanding of startup challenges, trends, and growth strategies. Deep familiarity with thought leaders in AI and startups (e.g., A16Z, Paul Graham, Sequoia). c. Soft Skills Strong communication and collaboration abilities. Excellent organizational and project management skills. A growth mindset with a passion for AI-driven innovation. What We Offer The opportunity to build and scale AI-powered products in a fast-moving startup environment. A chance to shape the future of AI-driven automation and digital workforces. A collaborative, high-velocity culture that values experimentation and impact. Fully remote work with flexible arrangements. How to Apply Please submit your resume, portfolio, and a brief cover letter detailing: Your experience with AI, LLMs, and automation workflows. Examples of AI products you’ve built and scaled end-to-end. Your insights into the startup ecosystem and how they inform your approach to AI-driven product development. If you are passionate about building AI-native products and driving innovation in a high-growth startup, we’d love to hear from you!

Posted 1 week ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications: Bachelor’s degree or equivalent practical experience. 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree. 2 years of experience with data structures or algorithms in either an academic or industry setting. 2 years of experience with full stack development, across back-end such as Java, Python, GO, or C++ codebases, and front-end experience including JavaScript or TypeScript, HTML, CSS or equivalent. Preferred qualifications: Master's degree or PhD in Computer Science or related technical field. 2 years of experience with performance, systems data analysis, visualization tools, or debugging. Experience in code and system health, diagnosis and resolution, and software test engineering. Experience developing accessible technologies. About The Job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. In this role, you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. The Geo team is focused on building the most accurate, comprehensive, and useful maps for our users, through products like Maps, Earth, Street View, Google Maps Platform, and more. Every month, more than a billion people rely on Maps services to explore the world and navigate their daily lives. The Geo team also enables developers to use the power of Google Maps platforms to enhance their apps and websites. As they plot a course for the future of mapping, they are solving complex computer science problems, designing beautiful and intuitive product experiences, and improving our understanding of the real world. Responsibilities Write product or system development code. Participate in, or lead design reviews with peers and stakeholders to decide amongst available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. 
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India. Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience. Experience in distributed data processing frameworks and modern Google Cloud Platform (GCP) analytical and transactional data stores like BigQuery, CloudSQL, AlloyDB, etc., and experience in one database type to write SQL. Experience in GCP. Preferred qualifications: Experience in working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures. Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption, and the ability to implement secure key storage using a Key Management System. Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience architecting, developing software, or internet-scale production-grade Big Data solutions in virtualized environments. Experience in Big Data, information retrieval, data mining, or Machine Learning. Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc. About The Job The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product tests. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and drive excellence in our products. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
Engage with technical leads, and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP). Design, Migrate/Build and Operationalize data storage and processing infrastructure using Cloud native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .
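For context on the BigQuery work this listing centers on, here is a hedged sketch of running an analytical query with the official google-cloud-bigquery Python client. The project, dataset, and table names are invented for illustration, and the snippet assumes Application Default Credentials are already configured.

```python
# Hedged sketch: running an analytical query on BigQuery with the official
# Python client. The project, dataset, and table names are hypothetical.
from google.cloud import bigquery

def daily_event_counts(project_id: str) -> None:
    client = bigquery.Client(project=project_id)
    sql = """
        SELECT DATE(event_ts) AS event_date, COUNT(*) AS events
        FROM `my_dataset.events`  -- hypothetical table
        WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
        GROUP BY event_date
        ORDER BY event_date
    """
    # query() submits the job; result() waits for and iterates over rows.
    for row in client.query(sql).result():
        print(row.event_date, row.events)

if __name__ == "__main__":
    daily_event_counts("my-gcp-project")  # hypothetical project id
```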

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India. Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience. Experience in distributed data processing frameworks and modern Google Cloud Platform (GCP) analytical and transactional data stores like BigQuery, CloudSQL, AlloyDB, etc., and experience in one database type to write SQL. Experience in GCP. Preferred qualifications: Experience in working with/on data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools, environments, and data structures. Experience with encryption techniques like symmetric, asymmetric, HSMs, and envelope encryption, and the ability to implement secure key storage using a Key Management System. Experience in building multi-tier, high-availability applications with modern technologies such as NoSQL, MongoDB, SparkML, and TensorFlow. Experience architecting, developing software, or internet-scale production-grade Big Data solutions in virtualized environments. Experience in Big Data, information retrieval, data mining, or Machine Learning. Experience with IaC and CI/CD tools like Terraform, Ansible, Jenkins, etc. About The Job The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and modernization projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform/product tests. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work with Product Management and Product Engineering teams to build and drive excellence in our products. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Interact with stakeholders to translate customer requirements into recommendations for appropriate solution architectures and advisory services.
Engage with technical leads, and partners to lead high velocity migration and modernization to Google Cloud Platform (GCP). Design, Migrate/Build and Operationalize data storage and processing infrastructure using Cloud native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data. Take various project requirements and organize them into clear goals and objectives, and create a work breakdown structure to manage internal and external stakeholders. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form .

Posted 1 week ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India. Minimum qualifications: Bachelor's degree in Computer Science, Engineering, Mathematics, a related field, or equivalent practical experience. Experience in developing and troubleshooting data processing algorithms and software using Python, Java, Scala, Spark, and Hadoop frameworks. Experience in Google Cloud Platform. Experience in data processing frameworks and Google Cloud Platform with analytical and transactional data stores like BigQuery, CloudSQL, AlloyDB, etc. Preferred qualifications: Experience in Big Data, information retrieval, data mining, or Machine Learning. Experience in building applications with modern technologies like NoSQL, MongoDB, SparkML, and TensorFlow. Experience with architecting, developing software, or internet-scale production-grade Big Data solutions in virtualized environments. Experience with Infrastructure as Code (IaC) and CI/CD tools like Terraform, Ansible, Jenkins, etc. Experience with encryption techniques like symmetric, asymmetric, Hardware Security Modules (HSMs), and envelope encryption, with the ability to implement secure key storage using a Key Management System. Experience in working with data warehouses, including technical architectures, infrastructure components, ETL/ELT and reporting tools, environments, and data structures. About the job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. In this role, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform (GCP). You will work on data migrations and modernization projects, and with customers to design data processing systems, develop data pipelines optimized for scaling, and troubleshoot platform/product tests. You will have an understanding of data governance and security controls. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. You will work with Product Management and Product Engineering teams to build and drive excellence in products. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Interact with stakeholders to translate customer requirements into recommendations for solution architectures and advisory services. Engage with technical leads and partners to lead migration and modernization to Google Cloud Platform (GCP). Design, migrate/build, and operationalize data storage and processing infrastructure using Cloud native products. Develop and implement data quality and governance procedures to ensure the accuracy and reliability of data.
Take project requirements and organize them into goals and objectives, and create a work breakdown structure to manage internal and external stakeholders. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Hyderabad, Telangana

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Bengaluru, Karnataka, India; Hyderabad, Telangana, India . Minimum qualifications: Bachelor’s degree or equivalent practical experience. 2 years of experience with software development in one or more programming languages, or 1 year of experience with an advanced degree. 2 years of experience with data structures or algorithms in either an academic or industry setting. Preferred qualifications: Master's degree or PhD in Computer Science or related technical field. 2 years of experience with front-end frameworks, full-stack development, or API development. Experience developing accessible technologies. About the job Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward. Behind everything our users see online is the architecture built by the Technical Infrastructure team to keep it running. From developing and maintaining our data centers to building the next generation of Google platforms, we make Google's product portfolio possible. We're proud to be our engineers' engineers and love voiding warranties by taking things apart so we can rebuild them. We keep our networks up and running, ensuring our users have the best and fastest experience possible. With your technical expertise you will manage project priorities, deadlines, and deliverables. You will design, develop, test, deploy, maintain, and enhance software solutions. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Write product or system development code. Participate in, or lead design reviews with peers and stakeholders to decide amongst available technologies. Review code developed by other developers and provide feedback to ensure best practices (e.g., style guidelines, checking code in, accuracy, testability, and efficiency). Contribute to existing documentation or educational content and adapt content based on product/program updates and user feedback. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on hardware, network, or service operations and quality. Google is proud to be an equal opportunity workplace and is an affirmative action employer. 
We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

8.0 - 10.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description We are seeking a highly skilled and experienced Technical Architect to lead the design and development of scalable, enterprise-grade applications and AI/ML solutions. The ideal candidate will have deep expertise in system architecture, hands-on experience with Python or C# .Net, and the ability to guide and mentor technical teams. This role involves client interaction, active participation in technical discussions, and solution design, especially for modern, AI-driven applications such as Retrieval-Augmented Generation (RAG) systems. Key Responsibilities Lead the architectural design and development of enterprise-level applications and AI/ML solutions. Collaborate with business and technical stakeholders to translate requirements into scalable and maintainable architectures. Design and implement end-to-end solutions with a focus on performance, security, and maintainability. Provide technical leadership and mentoring to development teams. Conduct code reviews, enforce best practices, and ensure adherence to architectural standards. Participate in technical discussions with client teams, providing guidance and strategic recommendations. Oversee integration of AI/ML components, with a strong emphasis on RAG-based solutions. Evaluate emerging technologies and drive innovation in architecture and solutioning. Work closely with DevOps and QA teams to support CI/CD, automated testing, and deployment practices. Required Skills And Qualifications 8-10 years of overall experience in software development and architecture. Proven experience designing and building large-scale enterprise applications. Proficient in either Python or C# .Net, with strong coding and debugging skills. Solid understanding of architectural patterns (e.g., microservices, event-driven, layered architecture). Hands-on experience with cloud platforms (e.g., AWS, Azure, or GCP). Strong experience working with databases (SQL and NoSQL), APIs, and integration patterns. Exposure to AI/ML solutions, especially RAG-based architectures (e.g., combining LLMs with vector databases, context-aware search). Familiarity with vector databases like FAISS, Pinecone, or Weaviate. Strong understanding of LLMs, embeddings, prompt engineering, and data pipelines. Excellent communication and interpersonal skills. Experience interacting with client stakeholders in technical discussions. Ability to manage technical teams, assign tasks, and ensure high-quality deliverables. Preferred Qualifications: Experience with containerization (Docker, Kubernetes). Exposure to MLOps and deployment of AI models in production. Experience in Agile/Scrum methodologies. Certifications in cloud architecture (AWS Solutions Architect, Azure Architect, etc.) or AI/ML will be a plus.
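As a rough illustration of the vector-database piece of the RAG architectures named above, the sketch below indexes placeholder embeddings with FAISS (one of the tools the posting lists) and runs a nearest-neighbour search. The vectors are random stand-ins for real embedding-model output, so treat it as a shape-of-the-API sketch rather than a working retrieval system.

```python
# Hedged sketch of the vector-search building block behind a RAG system:
# index document embeddings, then find the nearest neighbours of a query.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)

doc_vectors = rng.random((1000, dim), dtype=np.float32)  # pretend document embeddings
index = faiss.IndexFlatL2(dim)                           # exact L2 search index
index.add(doc_vectors)                                   # load the corpus

query = rng.random((1, dim), dtype=np.float32)           # pretend query embedding
distances, ids = index.search(query, 5)                  # top-5 nearest documents
print("nearest document ids:", ids[0])
print("distances:", distances[0])
```

In a real deployment the returned ids would map back to document chunks whose text is then placed into the LLM prompt as retrieved context.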

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

About This Role Role Brief Internal Audit’s primary mission is to provide assurance to the Board of Directors and Executive Management that BlackRock’s businesses are well managed and meeting strategic, operational and risk management objectives. The team engages with senior leaders and all of BlackRock’s individual business units globally to understand and advise on the risks in their business, evaluate the effectiveness of key processes and assist in the design of best practices that can improve their results. Internal Audit reports directly to the Audit Committee of the Board of Directors, and our work builds confidence that BlackRock will meet its obligations to clients, shareholders, employees and other stakeholders. Data Analytics The Data Analytics (DA) team leverages various data science, business intelligence, and analytical methods to enable and optimize a data-driven approach to assessing BlackRock’s control environment. The DA team is responsible for building and maintaining an inventory of self-service tools for auditors, supporting the risk assessment of BlackRock business units, and assisting the development of timely and accurate Internal Audit management information. DA team members may also design and perform certain testing as part of audits of BlackRock business units and technology controls across application systems and infrastructure components. India BlackRock India is a microcosm of the firm’s global operating platform that brings scaled capabilities in technology and investment management operations to support various functions, provide business continuity for critical operations, and drive innovation and operational excellence, including for domestic commercial initiatives. Responsibilities & Qualifications Role Description: This position is an Audit Execution Lead role. The primary responsibility is to support audits end-to-end by cleaning, analyzing, and visualizing data to identify potential risks, assess compliance with regulations, and provide valuable insights to auditors. The candidate will collaborate closely with business, technology and data teams, and internal risk partners to support the objectives of the Internal Audit function. This will involve working together to ensure audits are executed efficiently, with a focus on data-driven insights and solutions. As part of our DA team, we are seeking an independent contributor who is eager to learn new technologies and work with others to implement and explain them. We value creativity and encourage our team members to challenge traditional methods of audit execution and testing. Successful team members thrive in a fast-paced environment, actively contributing to audits while helping to evolve audit processes, tools, and methodologies. The role may include people-manager responsibilities; as such, the candidate will be expected to demonstrate key leadership behaviors to foster a thriving, high-performance environment. Individual contributors, as subject matter experts, guide technical direction in the audit execution space, lead and contribute to multi-year projects, mentor less experienced specialists, and provide insights that influence long-term strategic decisions. Specific Responsibilities Will Include Develop code to execute audit tests, build tools, and/or execute data-centric activities supporting the department, and ensure that all code is properly documented and maintained.
Contribute to the strategic development of the DA program, including the design and implementation of tools and technologies and the development, delivery, and distribution of data analytics presentations, training, and methodology. Propose alternative and creative approaches to audit testing, leveraging technology to either gain efficiencies or provide additional coverage. Facilitate discussions with audit stakeholders and demonstrate quick understanding of risk, controls, and the data analytics solutions that can be offered to support audit objectives. Participate in short-term data analysis activities aimed at supporting audit delivery and/or other ad-hoc requests, including closure verification of issues, regulatory inquiries, and strategic initiatives. Network to cultivate strong relationships with firm-wide partners to ensure successful analytic activities such as retrieval of new data sets, learning technology architecture, troubleshooting, etc. Skills And Experience Bachelor’s or master’s degree in information systems, data analytics, data science, computer science, economics, risk management or another quantitative related field. 5+ years in data analytics within Internal Audit, preferably within the wealth management, asset management or banking industry. Strong SQL skills required, along with programming experience in Python or R and experience with any other scripting languages (VBA, PowerShell, etc.). Working experience with a Business Intelligence tool (Power BI, Tableau) is preferred. Experience with both structured and unstructured data as well as experience with Data Warehousing (e.g. Snowflake) and Extract, Transform and Load (ETL). Hands-on experience with techniques like text analytics, web scraping, Selenium, N-Gram analysis, Sentiment Analysis, etc. is preferred. This professional needs a strong understanding of, or the ability to quickly learn, business processes, risks, and controls to enhance audit efficiency and effectiveness through the development and delivery of audit data analytics. Team player with project management skills, able to deliver timely, high-quality results, with attention to detail and strong analytical and problem-solving skills. Strong interpersonal and communication skills (verbal, written, and listening). Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
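As a small illustration of the text-analytics techniques this listing mentions (e.g., N-Gram analysis over free-text records), here is a standard-library Python sketch that counts the most frequent bigrams in a snippet of made-up audit commentary. It is a sketch of the general technique, not BlackRock's tooling.

```python
# Hedged sketch: N-gram frequency analysis over free-text notes, the kind of
# technique an audit data-analytics team might use to surface recurring
# phrases. Pure standard library; the sample text is invented.
from collections import Counter
import re

def top_ngrams(text: str, n: int = 2, k: int = 5) -> list[tuple[tuple[str, ...], int]]:
    tokens = re.findall(r"[a-z']+", text.lower())       # simple word tokenizer
    ngrams = zip(*(tokens[i:] for i in range(n)))       # sliding n-token windows
    return Counter(ngrams).most_common(k)

sample = ("manual override approved without secondary review; "
          "manual override approved by same user; "
          "secondary review skipped for manual override")
print(top_ngrams(sample, n=2, k=3))
```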

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Gen AI / ML Data Scientist Location: Gurugram / Chennai (Hybrid or Onsite) Experience: 4-6 years Employment Type: Full-time Job Summary We are looking for an experienced Data Scientist with strong expertise in Generative AI and Machine Learning to join our team. The ideal candidate will have hands-on experience in building, deploying, and optimizing AI/ML models for various real-world use cases, especially in GenAI (LLMs, Transformers, Diffusion Models, etc.). Responsibilities: Design and develop ML/AI models including but not limited to supervised, unsupervised, NLP, and Gen AI techniques. Build and fine-tune LLM-based solutions (e.g., GPT, BERT, LLaMA) for enterprise applications like chatbots, summarization, information extraction, etc. Conduct data preprocessing, feature engineering, and model evaluation using appropriate ML/AI techniques. Collaborate with data engineers, MLOps, and product teams to deploy models in production. Conduct research and stay up to date on emerging trends in Gen AI and ML. Work on Prompt Engineering, Retrieval-Augmented Generation (RAG), embeddings, and vector databases. Present findings and insights to business stakeholders and non-technical audiences. Skills & Experience: 5 years of experience in Data Science, ML, and AI. Strong experience with Generative AI frameworks (OpenAI, Hugging Face Transformers, LangChain, LlamaIndex, etc.). Proficiency in Python, with experience in libraries like Scikit-learn, TensorFlow/PyTorch, pandas, NumPy, etc. Experience working with LLMs, vector databases (e.g., FAISS, Pinecone), and large datasets. Familiarity with cloud platforms like AWS, GCP, or Azure for AI/ML model deployment. Good understanding of MLOps pipelines, CI/CD for ML, and model monitoring. Strong problem-solving and communication skills. Good to Have: Knowledge of diffusion models, image/video generation, or multimodal models. Experience with data visualization tools (e.g., Tableau, Power BI, Plotly). Domain knowledge in BFSI, healthcare, or retail is a plus. Qualification: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. (ref:hirist.tech)
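To make the classical side of the toolkit above concrete, here is a short Scikit-learn sketch (one of the libraries the listing names) that ranks documents against a query with TF-IDF features and cosine similarity, the same retrieval idea that embedding-based RAG pipelines generalize. The documents and query are invented.

```python
# Hedged sketch: rank documents by similarity to a query using TF-IDF
# features and cosine similarity with scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Customer requested a refund for a delayed shipment.",
    "Chatbot escalated a billing dispute to a human agent.",
    "Quarterly report summarizes loan default rates.",
]
query = ["Which ticket is about a shipping delay refund?"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(docs)     # fit vocabulary on the corpus
query_vec = vectorizer.transform(query)         # project the query into the same space

scores = cosine_similarity(query_vec, doc_matrix)[0]
best = scores.argmax()
print(f"best match (score {scores[best]:.2f}): {docs[best]}")
```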

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description We are seeking a Java Developer with expertise in Prompt Engineering to join our AI-driven development team. The ideal candidate will combine robust Java backend development capabilities with hands-on experience in integrating and fine-tuning LLMs (e.g., OpenAI, Cohere, Mistral, or Anthropic), designing effective prompts, and embedding AI functionality into enterprise applications. This role is ideal for candidates passionate about merging traditional enterprise development with cutting-edge AI technologies. Key Responsibilities Design, develop, and maintain scalable backend systems using Java (Spring Boot) and integrate AI/LLM services. Collaborate with AI/ML engineers and product teams to design prompt templates, test prompt effectiveness, and iterate for accuracy, performance, and safety. Build and manage RESTful APIs that interface with LLM services and microservices in production-grade environments. Fine-tune prompt formats for various AI tasks (e.g., summarization, extraction, Q&A, chatbots) and optimize for performance and cost. Apply RAG (Retrieval-Augmented Generation) patterns to retrieve relevant context from data stores for LLM input. Ensure secure, efficient, and scalable communication between LLM APIs (OpenAI, Google Gemini, Azure OpenAI, etc.) and internal systems. Develop reusable tools and frameworks to support prompt evaluation, logging, and improvement cycles. Write high-quality unit tests, conduct code reviews, and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or GitLab. Work in Agile/Scrum teams and contribute to sprint planning, estimation, and retrospectives. Must-Have Technical Skills Java & Backend Development: Core Java 8/11/17, Spring Boot, Spring MVC, Spring Data JPA, RESTful APIs, JSON, Swagger/OpenAPI, Hibernate or other ORM tools, microservices architecture. Prompt Engineering / LLM Integration: Experience working with OpenAI (GPT-4, GPT-3.5), Claude, Llama, Gemini, or Mistral models. Designing effective prompts for various tasks (classification, summarization, Q&A, etc.). Familiarity with prompt chaining and zero-shot/few-shot learning. Understanding of token limits, temperature, top_p, and stop sequences. Prompt evaluation methods and frameworks (e.g., LangChain, LlamaIndex, Guidance, PromptLayer). AI Integration Tools: LangChain or LlamaIndex for building LLM applications. API integration with AI platforms (OpenAI, Azure AI, Hugging Face, etc.). Vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB). DevOps / Deployment: Docker, Kubernetes (preferred). CI/CD tools (Jenkins, GitHub Actions). AWS/GCP/Azure cloud environments. Monitoring: Prometheus, Grafana, ELK Stack. Good-to-Have Skills Python for prototyping AI workflows. Chatbot development using LLMs. Experience with RAG pipelines and semantic search. Hands-on with GitOps, IaC (Terraform), or serverless functions. Experience integrating LLMs into enterprise SaaS products. Knowledge of Responsible AI and bias mitigation strategies. Soft Skills Strong problem-solving and analytical thinking. Excellent written and verbal communication skills. Willingness to learn and adapt in a fast-paced, AI-evolving environment. Ability to mentor junior developers and contribute to tech strategy. Education Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Preferred Certifications (Not Mandatory): OpenAI Developer or Azure AI Certification, Oracle Certified Java Professional, AWS/GCP Cloud Certifications. (ref:hirist.tech)
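The sampling controls this posting names (temperature, top_p, stop sequences, token limits) all appear in a single chat-completion request. The sketch below uses the OpenAI Python client (v1.x) for brevity even though the role is Java-centric; the same request shape applies from any HTTP client. The model name and prompt are illustrative, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# Hedged sketch: a chat-completion call showing the sampling parameters
# mentioned above. Model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You extract invoice fields as JSON."},
        {"role": "user", "content": "Invoice 1042, due 2024-07-01, total $980."},
    ],
    temperature=0.2,   # low randomness suits extraction tasks
    top_p=1.0,         # nucleus-sampling cutoff
    max_tokens=200,    # cap on generated tokens
    stop=["\n\n"],     # stop sequence
)
print(response.choices[0].message.content)
```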

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Ops Support Specialist 5 is an entry-level position responsible for providing operations support services, including but not limited to: record/documentation maintenance, storage & retrieval of records, account maintenance, imaging and the opening of accounts in coordination with the Operations - Core Team. Additionally, the Ops Support Specialist 5 serves as the liaison between operations staff, relationship managers, project managers, custodians and clients. The overall objective of this role is to provide day-to-day operations support in alignment with Citi operations support infrastructure and processes. Responsibilities: Resolve customer inquiries and supervise escalated issues, providing efficient and effective customer service to Citi’s clients Identify opportunities to offer value added products and services while adhering to strict laws and regulation governing Telesales Communicate daily with management on productivity, quality, availability, Management Information System (MIS) indicators, as well as providing written and oral communications to supported business areas for approval of correct financial entries and resolution of incorrect entries Facilitate training based on needs of staff within the department and assist with answering staff questions within Disputes, as needed Support an expansive and diverse array of products and services Assist with ongoing Lean and process improvement projects Resolve complex problems based on best practice/precedence, escalating as needed Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 2-4 years of relevant experience Proficient in Microsoft Office Comprehensive knowledge of the Dispute process Ability to work unsupervised and apply problem-solving capabilities Ability to work occasional weekends to support Pega releases and COB testing Working knowledge of Pega and/or G36 functionality, Continuity of Business (CoB) testing, and creating and resolving Trust Receipts (TRs) Demonstrated analytical skills and mathematical knowledge Consistently demonstrates clear and concise written and verbal communication skills Education: High School diploma or equivalent This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Operations - Core ------------------------------------------------------ Job Family: Operations Support ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Ops Sup Analyst 2 is an intermediate level position responsible for providing operations support services, including but not limited to: record/documentation maintenance, storage & retrieval of records, account maintenance, imaging and the opening of accounts in coordination with the Operations - Core Team. Additionally, the Ops Sup Analyst 2 serves as the liaison between operations staff, relationship managers, project managers, custodians and clients. The overall objective of this role is to provide day-to-day operations support in alignment with Citi operations support infrastructure and processes. Responsibilities: Update help content used by Knowledge Hub end users to service client inquiries, as needed Execute work assigned, including annual review certification and change requests Serve as liaison to business for work assignments by asking fact-finding questions, following up on open items and helping with content approval Conduct needs assessment and update content or develop content-related solutions according to business requirements Research and seek out solutions to inquiries on help content and all other open items related to business including policy gaps and changes Monitor work progression ensuring completion of assignments by requested due date Ensure consistent application of team process controls Fulfilling clients’ needs while providing an exceptional client experience is the expected behavior of all our employees, and it will be measured by specific metrics. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 0-2 years of relevant experience Proficient in Microsoft Office Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Self-motivated and detail oriented Proven organization and time management skills Demonstrated problem-solving and decision-making skills Consistently demonstrates clear and concise written and verbal communication skills Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Operations - Core ------------------------------------------------------ Job Family: Operations Support ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

The Ops Sup Analyst 1 is an entry-level position responsible for providing operations support services, including but not limited to: record/documentation maintenance, storage & retrieval of records, account maintenance, imaging and the opening of accounts in coordination with the Operations - Core Team. Additionally, the Ops Sup Analyst 1 serves as the liaison between operations staff, relationship managers, project managers, custodians and clients. The overall objective of this role is to provide day-to-day operations support in alignment with Citi operations support infrastructure and processes. Responsibilities: Perform business analysis and documentation of the current and future state of Client Reports and Advices (client communication letters, notices, and confirms) Provide regular status updates for all project participants and create presentations for steering committee updates Work with various Legal & Compliance teams to obtain sign-off on all regulatory business requirements Serve as primary liaison between the key business stakeholder and technology, recommending business priorities by advising stakeholders on options, risks, costs, prioritizations, and delivery timelines Create and facilitate training sessions, webcast demos and write User Acceptance Test scripts and business scenarios against specified requirements Create, manage and maintain project plans and act as the project manager for all follow-ups across various departments Work on multiple projects in parallel focusing on continued delivery of regulatory client deliverables, such as legal statements/performance reporting/advices/letters/notices Fulfilling clients’ needs while providing an exceptional client experience is the expected behavior of all our employees, and it will be measured by specific metrics. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: Previous relevant experience preferred Proficient in Microsoft Office General knowledge of client reporting across the industry and our competitors Working knowledge of SQL environments and database queries Proven organization and time management skills Demonstrated problem-solving and decision-making skills Consistently demonstrates clear and concise written and verbal communication skills Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Operations - Core ------------------------------------------------------ Job Family: Operations Support ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. 
------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

7.0 years

0 Lacs

New Delhi, Delhi, India

Remote

Job Description: Airbus FHS provides customized services to its Customers (mainly Airlines) ranging from FHS-Component up to a TSP (Tailored Support Program), with the objective of providing airlines with significant inventory management and repair cost savings whilst supporting the improvement of their maintenance and engineering activities to allow increased aircraft reliability and availability. Accountabilities The Jobholder, as a member of the Flight Hour Services (FHS) entity, reports operationally to the HO Materials - India & South Asia. The Jobholder functionally reports to the Head of FHS Customer Operations (SMROC) & the Head of FHS Supply Chain Operations (SMROS), based in Toulouse. SMROC & SMROS are responsible for the oversight of all the operational management activities related to the FHS and TSP Component contracts, the monitoring of the delivery performance of all improvement action plans, definition of fixes and the monitoring of their implementation / effectiveness. As a reminder, the FHS Operations team in India & South Asia is organised into two individual domains for the effective management and oversight of all operational activities related to FHS Operations. Local for Local & Local For Global Customer Support (IISMOC) FHS Customer Operations Team (SMROC - FCOs) - (MEA) South Asia FHS Customer Operations Local For Global Supply Chain Support (IISMOS) Virtual Direct Shipment Officers (SMROS - VDSO) Abnormal Taskforce (SMROS) Component Ageing Taskforce The Jobholder is accountable for: Ensuring daily FHS operational performance, managing the local interface with Customer(s) to secure their satisfaction. Ensuring adherence to contract(s) service level (operational, quality & cost performance) through clear and consistent reporting of Key Performance Indicators. Ensuring supply of all required operational data to the CPM to secure the overall commercial performance and development of the deal(s) Ensuring communication on progress of action plans to resolve supply chain issues to the Customer(s). Entry Into Service planning and work stream management of new Component Deals Ensure Warranty Administration & Coordination with Supplier/Airbus for warranty claims and remedial actions, passing benefits to customer(s) (if covered in contract scope) Develop & Ensure Component repair activities within the region, including managing the Repair Loop & Coordination with Supplier & customer Ensuring DSO (Direct Shipment Officer) activities on relevant component deals Ensuring Material Planning (TCI Items), Business Administration & General Administration activities are administered under the scope of FHS-TSP contract(s) Support Sales campaigns in the region as directed & contribute to business development activities in order to enhance the regional footprint Monitor the company procedures applicable to the area of work and submit any proposals for such revisions to optimise the quality and effectiveness of those procedures. Resolve operational queries from other departments, Customer & function. Ensure continuous monitoring of all the 12 legs of the Supply chain for any blockages in terms of abnormal transactions, ownerships or part location. 
Ensuring the Virtual DSO (Direct Shipment Officer) team is optimising the AFHS Supply chain management Develop & Ensure Component Supply Chain Improvement activities within the region are aligned with FHS Business Strategy and the regionalization footprint Dimensions Subordinate employees (FTE headcount): 11 (AOP 2025) Other dimensions relevant to the position: Fleet currently covered: FCO - AIC, JZR, ETD, MSC, FAD VDSO - ETD, FIN, BAW Main activities Within FHS-TSP & FHS-C contract(s), the jobholder is responsible for the organisation and management of the Component Operations team, which is accountable to: Deliver and monitor the daily operational FHS activities with the customer(s) Ensure respect of contractual performance, service level and customer satisfaction as per FHS agreements and financial results Initiate all appropriate improvement actions to optimise operational performance of the FHS contract(s) Ensure smooth EIS of the FHS services and customer satisfaction with initial operations on new component deal(s) Administer warranty claims on FHS TSP contract(s) as per relevant support clauses & run dashboarding including reporting to customer (if covered in contract) Perform Exchange Ordering, Repair Ordering, AMASIS transactions (as applicable), monitoring of Shipping & Customs Clearance activities (as applicable) & Direct Shipment Officer activities (on site or remotely, as applicable) related to parts covered under FHS contract(s) Coordinate closely with FTM TSP/CT Technical Records to achieve nominal production & delivery flow (S2S) Perform Material Planning for TCI, Life Limited Items based on Forecast issued by TSP-Planning for FHS TSP contract(s) Perform Business Administration & General Administration activities related to execution & monitoring of FHS TSP contract(s) Ensure the Abnormal task force team is optimising the Shelf to Shelf for all the AFHS Components in continuous collaboration with the Kuala Lumpur and Toulouse teams VDSO - Ensure end-to-end monitoring of Leg 6 for the assigned customers, thereby supporting the component supply ecosystem Perform all activities related to repair of FHS Components within the region, including but not limited to Coordination with Supplier for meeting TAT, Quality, AMASIS Transactions, Repair Loop and Logistics management Coordination with customer for retrieval of Core Unit(s) With regard to management responsibility, the jobholder's missions consist of: Organisation and staffing of the Component team as per business requirements. Putting in place and running a group operating model allowing control of business activities (performance, risk...) and associated resources in line with AOS (Airbus Operation System) principles. Ensuring her/his team objectives are defined and managing individual performance of team members. Managing team skills, competences and knowledge. Developing processes, methods and tools with the aim to continuously improve efficiency and quality of services delivered. Actively reporting safety-related issues and any other CIM-related issues and, in relation, participating in the whole process of finding a resolution to avoid future recurrence. Acting with respect to ethics and compliance with Airbus corporate rules. 
Outputs Component Operations: Contractual performance, service level and customer satisfaction as per FHS C agreements and financial results, Warranty administration, Business & General Administration, Logistics activity as per FHS TSP contract, Sales & Business Development Support for the region, Control of FHS C Regional repair activities Team organisation. Team reporting. Team engagement to reach assigned objectives. Experience, Skills & Competencies Education Degree holder in Aerospace Engineering/Aircraft Maintenance or equivalent Fluent English Technical knowledge: Total aviation experience of 7 years at a minimum 5+ years of experience in an Aviation Logistics environment Experience in working with OEMs, Suppliers or MRO Operations. Experience in team management. Leadership skills. Excellent team spirit. Highly organised and structured. Capacity to work in a dynamic environment. Good communication skills and experience in customer management Knowledge of Airline Operations and/or Power by the Hour services related activities is preferred Knowledge of Manufacturer Warranty, Supplier Warranty Management, Airline Logistics and Supply Chain Management is preferred Knowledge of Maintenance Information System principles is required. Knowledge of specific Maintenance Information Systems (AMASIS, RAMCO, AMOS) is desirable. Excellent level of spoken and written English This job requires an awareness of any potential compliance risks and a commitment to act with integrity, as the foundation for the Company’s success, reputation and sustainable growth. Company: Airbus India Private Limited Employment Type: Permanent Experience Level: Professional Job Family: Material Support & services By submitting your CV or application you are consenting to Airbus using and storing information about you for monitoring purposes relating to your application or future employment. This information will only be used by Airbus. Airbus is committed to achieving workforce diversity and creating an inclusive working environment. We welcome all applications irrespective of social and cultural background, age, gender, disability, sexual orientation or religious belief. Airbus is, and always has been, committed to equal opportunities for all. As such, we will never ask for any type of monetary exchange in the frame of a recruitment process. Any impersonation of Airbus to do so should be reported to emsom@airbus.com . At Airbus, we support you to work, connect and collaborate more easily and flexibly. Wherever possible, we foster flexible working arrangements to stimulate innovative thinking.

Posted 1 week ago

Apply

5.0 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location(s): Tower-11, (IT/ITES) SEZ of M/s Gurugram Infospace Ltd, Vill. Dundahera, Sector-21, Gurugram, Haryana, 122016, IN Line Of Business: Data Estate (DE) Job Category: Engineering & Technology Experience Level: Experienced Hire At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Job Summary: We are seeking an experienced Senior Data Specialist to join our team. The ideal candidate will have extensive hands-on experience with the latest Oracle database versions and Databricks, particularly in a large enterprise environment. You will be responsible for designing, developing, optimizing, and maintaining complex SQL and PL/SQL solutions to support business analysis and data transformations. Key Responsibilities: Design and develop advanced PL/SQL packages, procedures, and triggers for high-volume transactional systems. Write and optimize complex SQL queries for efficient data retrieval and manipulation. Leverage Oracle analytical functions, collections, and advanced features to build high-performance solutions. Perform query performance tuning using optimizer hints and execution plan analysis. Collaborate with cross-functional teams to support application development and data integration. Work within large-scale enterprise database systems, ensuring high availability and data integrity. Provide support for database design, schema management, and data modeling activities. Document technical specifications, procedures, and best practices. Required Skills and Qualifications: Minimum 5-6 years of hands-on experience with Oracle Database (latest versions preferred). Proven experience working in large-scale enterprise database environments. Strong expertise in SQL and PL/SQL programming, including the ability to write complex logic and efficient code. In-depth understanding of Oracle analytical functions, collections, and advanced PL/SQL features. Solid knowledge and hands-on experience in query tuning using optimizer hints. Experience with Python and HTML for scripting and integration tasks. Experience with Databricks. Excellent verbal and written communication skills. Preferred Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary. 
For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee’s tenure with Moody’s.
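As a concrete illustration of the analytical-function and optimizer-hint work described in the posting above, here is a minimal sketch using the python-oracledb driver (the role also lists Python). The connection details, table, columns, and index name are invented placeholders, and the hint is only an example; real tuning would start from the execution plan.

```python
# Illustrative sketch: an Oracle analytical query with an optimizer hint,
# executed from Python via python-oracledb. All identifiers are placeholders.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")

sql = """
    SELECT /*+ INDEX(t trades_trade_date_ix) */
           account_id,
           trade_date,
           amount,
           SUM(amount) OVER (PARTITION BY account_id
                             ORDER BY trade_date) AS running_total,
           RANK() OVER (PARTITION BY account_id
                        ORDER BY amount DESC)     AS amount_rank
    FROM   trades t
    WHERE  trade_date >= DATE '2024-01-01'
"""

with conn.cursor() as cur:
    cur.execute(sql)
    for account_id, trade_date, amount, running_total, amount_rank in cur:
        # feed downstream transformation / reporting
        print(account_id, trade_date, amount, running_total, amount_rank)

conn.close()
```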

Posted 1 week ago

Apply

8.0 years

0 Lacs

Delhi Cantonment, Delhi, India

On-site

Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA Seeking a talented Solution Architect/BDM for On-Prem/Private AI. Requires deep open source LLM expertise to translate client needs into technical solutions. Responsibilities include assessing needs, recommending LLM tech, sizing opportunities and infrastructure, and collaborating on end-to-end solutions with costing. Needs strategic thinking, strong technical and business skills to drive innovation and client value. What You'll Be Doing Key Roles and Responsibilities: Solution Architecture & Technical Leadership Demonstrate deep expertise in LLMs such as Phi-4, Mistral, Gemma, Llama and other foundation models Assess client business requirements and translate them into detailed technical specifications Recommend appropriate LLM solutions based on specific business outcomes and use cases Experience in sizing and architecting infrastructure for AI/ML workloads, particularly GPU-based systems. Design scalable and secure On-Prem/Private AI architectures Create technical POCs and prototypes to demonstrate solution capabilities Hands-on experience with vector databases (open-source or proprietary), such as Weaviate, Milvus, or Vald etc. Expertise in fine-tuning, query caching, and optimizing vector embeddings for efficient similarity searches Business Development Size and qualify opportunities in the On-Prem/Private AI space Develop compelling proposals and solution presentations for clients Build and nurture client relationships at technical and executive levels Collaborate with sales teams to create competitive go-to-market strategies Identify new business opportunities through technical consultation Project & Delivery Leadership Work with delivery teams to develop end-to-end solution approaches and accurate costing Lead technical discovery sessions with clients Guide implementation teams during solution delivery Ensure technical solutions meet client requirements and business outcomes Develop reusable solution components and frameworks to accelerate delivery AI Agent Development Design, develop, and deploy AI-powered applications leveraging agentic AI frameworks such as LangChain, AutoGen, and CrewAI. Utilize the modular components of these frameworks (LLMs, Prompt Templates, Agents, Memory, Retrieval, Tools) to build sophisticated language model systems and multi-agent workflows. Implement Retrieval Augmented Generation (RAG) pipelines and other advanced techniques using these frameworks to enhance LLM responses with external data. Contribute to the development of reusable components and best practices for agentic AI implementations. 
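To ground the RAG and vector-database responsibilities listed above, here is a small retrieval sketch built on open-source components (sentence-transformers for embeddings, FAISS for similarity search). The model name, documents, and query are illustrative placeholders; a production design would add chunking, metadata filtering, and re-ranking.

```python
# Illustrative retrieval step of a RAG pipeline: embed documents, index them,
# and fetch the most similar chunks for a query. Not a production design.
import numpy as np
import faiss
from sentence_transformers import SentenceTransformer

docs = [
    "GPU sizing depends on model parameters, precision, and batch size.",
    "Private AI deployments keep prompts and documents inside the enterprise.",
    "RAG augments an LLM prompt with context retrieved from a vector store.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")          # small open embedding model
emb = model.encode(docs, normalize_embeddings=True)      # shape: (n_docs, dim)

index = faiss.IndexFlatIP(emb.shape[1])                  # inner product == cosine on unit vectors
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(["How do I size GPUs for an LLM workload?"],
                     normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)

for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```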
Knowledge, Skills, and Attributes: Basic Qualifications: 8+ years of experience in solution architecture or technical consulting roles 3+ years of specialized experience working with LLMs and Private AI solutions Demonstrated expertise with models such as Phi-4, Mistral, Gemma, and other foundation models Strong understanding of GPU infrastructure sizing and optimization for AI workloads Proven experience converting business requirements into technical specifications Experience working with delivery teams to create end-to-end solutions with accurate costing Strong understanding of agentic AI systems and orchestration frameworks Bachelor’s degree in computer science, AI, or related field Ability to travel up to 25% Preferred Qualifications: Master's degree or PhD in Computer Science or related technical field. Experience with Private AI deployment and fine-tuning LLMs for specific use cases Knowledge of RAG (Retrieval Augmented Generation) and enterprise knowledge systems Hands-on experience with prompt engineering and LLM optimization techniques Understanding of AI governance, security, and compliance requirements Experience with major AI providers: OpenAI/Azure OpenAI, AWS, Google, Anthropic, etc. Prior experience in business development or pre-sales for AI solutions Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders Strong problem-solving abilities and analytical mindset Location: Delhi or Bangalore Workplace type: Hybrid Working About NTT DATA NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

This job is with Swiss Re, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. About The Team And Our Scope We are a forward-thinking tech organization within Swiss Re, delivering transformative AI/ML solutions that redefine how businesses operate. Our mission is to build intelligent, secure, and scalable systems that deliver real-time insights, automation, and high-impact user experiences to clients globally. You'll join a high-velocity AI/ML team working closely with product managers, architects, and engineers to create next-gen enterprise-grade solutions. Our team is built on a startup mindset — bias to action, fast iterations, and ruthless focus on value delivery. We’re not only shaping the future of AI in business — we’re shaping the future of talent. This role is ideal for someone passionate about advanced AI engineering today and curious about evolving into a product leadership role tomorrow. You'll get exposure to customer discovery, roadmap planning, and strategic decision-making alongside your technical contributions. Role Overview As an AI/ML Engineer, you will play a pivotal role in the research, development, and deployment of next-generation GenAI and machine learning solutions . Your scope will go beyond retrieval-augmented generation (RAG) to include areas such as prompt engineering, long-context LLM orchestration, multi-modal model integration (voice, text, image, PDF), and agent-based workflows. You will help assess trade-offs between RAG and context-native strategies, explore hybrid techniques, and build intelligent pipelines that blend structured and unstructured data. You’ll work with technologies such as LLMs, vector databases, orchestration frameworks, prompt chaining libraries, and embedding models, embedding intelligence into complex, business-critical systems. This role sits at the intersection of rapid GenAI prototyping and rigorous enterprise deployment, giving you hands-on influence over both the technical stack and the emerging product direction. Key Responsibilities Build Next-Gen GenAI Pipelines: Design, implement, and optimize pipelines across RAG, prompt engineering, long-context input handling, and multi-modal processing. Prototype, Validate, Deploy: Rapidly test ideas through PoCs, validate performance against real-world business use cases, and industrialize successful patterns. Ingest, Enrich, Embed: Construct ingestion workflows including OCR, chunking, embeddings, and indexing into vector databases to unlock unstructured data. Integrate Seamlessly: Embed GenAI services into critical business workflows, balancing scalability, compliance, latency, and observability. Explore Hybrid Strategies: Combine RAG with context-native models, retrieval mechanisms, and agentic reasoning to build robust hybrid architectures. Drive Impact with Product Thinking: Collaborate with product managers and UX designers to shape user-centric solutions and understand business context. Ensure Enterprise-Grade Quality: Deliver solutions that are secure, compliant (e.g., GDPR), explainable, and resilient — especially in regulated environments. What Makes You a Fit Must-Have Technical Expertise Proven experience with GenAI techniques and LLMs, including RAG, long-context inference, prompt tuning, and multi-modal integration. Strong hands-on skills with Python, embedding models, and orchestration libraries (e.g., LangChain, Semantic Kernel, or equivalents). 
Comfort with MLOps practices, including version control, CI/CD pipelines, model monitoring, and reproducibility. Ability to operate independently, deliver iteratively, and challenge assumptions with data-driven insight. Understanding of vector search optimization and retrieval tuning. Exposure to multi-modal models Nice-To-Have Qualifications Experience building and operating AI systems in regulated industries (e.g., insurance, finance, healthcare). Familiarity with Azure AI ecosystem (e.g., Azure OpenAI, Azure AI Document Intelligence, Azure Cognitive Search) and deployment practices in cloud-native environments. Experience with agentic AI architectures, tools like AutoGen, or prompt chaining frameworks. Familiarity with data privacy and auditability principles in enterprise AI. Bonus: You Think Like a Product Manager While this role is technical at its core, we highly value candidates who are curious about how AI features become products . If you’re excited by the idea of influencing roadmaps, shaping requirements, or owning end-to-end value delivery — we’ll give you space to grow into it. This is a role where engineering and product are not silos . If you’re keen to move in that direction, we’ll mentor and support your evolution. Why Join Us? You’ll be part of a team that’s pushing AI/ML into uncharted, high-value territory. We operate with urgency, autonomy, and deep collaboration. You’ll prototype fast, deliver often, and see your work shape real-world outcomes — whether in underwriting, claims, or data orchestration. And if you're looking to transition from deep tech to product leadership , this role is a launchpad. Swiss Re is an equal opportunity employer . We celebrate diversity and are committed to creating an inclusive environment for all employees. About Swiss Re Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience. 
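One step of the ingestion workflow described above (OCR, chunking, embeddings, indexing into a vector database) can be illustrated without any framework at all. The sketch below is a plain overlapping character-window chunker; the chunk size and overlap are arbitrary illustrative defaults, not values taken from the posting.

```python
# Minimal, framework-free chunking sketch for a document ingestion pipeline.
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 100) -> list[str]:
    """Split a document into overlapping character windows so facts that
    straddle a boundary still appear intact in at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks, start = [], 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap          # step back to create the overlap
    return chunks


document = "..."                        # text produced by the OCR step (placeholder)
for i, chunk in enumerate(chunk_text(document)):
    # downstream: embed `chunk` and upsert it into the vector database
    print(i, len(chunk))
```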
Reference Code: 134317

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers further. This is a world of more possibilities, more innovation, more openness, and the sky is the limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the Microsoft Fabric platform team builds and maintains the operating system and provides customers a unified data stack to run an entire data estate. The platform provides a unified experience, unified governance, enables a unified business model and a unified architecture. The Fabric Data Analytics, Insights, and Curation team is leading the way in understanding the Microsoft Fabric composite services and empowering our strategic business leaders. We work with very large and fast-arriving data and transform it into trustworthy insights. We build and manage pipelines, transformations, platforms, models, and so much more that empowers the Fabric product. As an Engineer on our team your core function will be Data Engineering, with opportunities in Analytics, Science, Software Engineering, DevOps, and Cloud Systems. You will be working alongside other Engineers, Scientists, Product, Architecture, and Visionaries bringing forth the next generation of data democratization products. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served. Responsibilities You will develop and maintain data pipelines, including solutions for data collection, management, transformation, and usage, ensuring accurate data ingestion and readiness for downstream analysis, visualization, and AI model training You will review, design, and implement end-to-end software life cycles, encompassing design, development, CI/CD, service reliability, recoverability, and participation in agile development practices, including on-call rotation You will review and write code to implement performance monitoring protocols across data pipelines, building visualizations and aggregations to monitor pipeline health. You’ll also implement solutions and self-healing processes that minimize points of failure across multiple product features You will anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies You will plan, implement, and enforce security and access control measures to protect sensitive resources and data You will perform database administration tasks, including maintenance and performance monitoring. 
You will collaborate with Product Managers, Data and Applied Scientists, Software and Quality Engineers, and other stakeholders to understand data requirements and deliver phased solutions that meet test and quality programs’ data needs, and support AI model training and inference You will become an SME on our team’s products and provide inputs for the strategic vision You will champion process, engineering, architecture, and product best practices in the team You will work with other team Seniors and Principals to establish best practices in our organization Embody our culture and values Qualifications Required/Minimum Qualifications Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years' experience in business analytics, data science, software development, data modeling or data engineering work OR Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 2+ years' experience in business analytics, data science, software development, or data engineering work OR equivalent experience 2+ years of experience in software or data engineering, with proven proficiency in C#, Java, or equivalent 2+ years in one scripting language for data retrieval and manipulation (e.g., SQL or KQL) 2+ years of experience with ETL and data cloud computing technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Logic Apps, Azure Functions, Azure Data Explorer, and Power BI or equivalent platforms Preferred/Additional Qualifications 1+ years of demonstrated experience implementing data governance practices, including data access, security and privacy controls and monitoring to comply with regulatory standards. Other Requirements Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Equal Opportunity Employer (EOP) #azdat #azuredata #fabricdata #dataintegration #azure #synapse #databases #analytics #science Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
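The pipeline-health responsibility above (building aggregations and visualizations to monitor pipeline health) can be sketched with a simple roll-up. The example below uses pandas; the column names, thresholds, and sample data are invented for illustration and are not part of the role description.

```python
# Illustrative pipeline-health aggregation: per-pipeline, per-day success rate
# and tail latency, with a simple alert flag. All values are made up.
import pandas as pd

runs = pd.DataFrame({
    "pipeline":  ["ingest", "ingest", "transform", "transform", "ingest"],
    "run_date":  pd.to_datetime(["2024-06-01", "2024-06-01", "2024-06-01",
                                 "2024-06-02", "2024-06-02"]),
    "succeeded": [True, False, True, True, True],
    "runtime_s": [420, 510, 1310, 1260, 395],
})

health = (
    runs.groupby(["pipeline", "run_date"])
        .agg(success_rate=("succeeded", "mean"),
             p95_runtime_s=("runtime_s", lambda s: s.quantile(0.95)))
        .reset_index()
)
health["alert"] = (health["success_rate"] < 0.99) | (health["p95_runtime_s"] > 1200)
print(health)
```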

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Microsoft is a company where passionate innovators come to collaborate, envision what can be and take their careers further. This is a world of more possibilities, more innovation, more openness, and the sky is the limit thinking in a cloud-enabled world. Microsoft’s Azure Data engineering team is leading the transformation of analytics in the world of data with products like databases, data integration, big data analytics, messaging & real-time analytics, and business intelligence. The products in our portfolio include Microsoft Fabric, Azure SQL DB, Azure Cosmos DB, Azure PostgreSQL, Azure Data Factory, Azure Synapse Analytics, Azure Service Bus, Azure Event Grid, and Power BI. Our mission is to build the data platform for the age of AI, powering a new class of data-first applications and driving a data culture. Within Azure Data, the Microsoft Fabric platform team builds and maintains the operating system and provides customers a unified data stack to run an entire data estate. The platform provides a unified experience, unified governance, enables a unified business model and a unified architecture. The Fabric Data Analytics, Insights, and Curation team is leading the way in understanding the Microsoft Fabric composite services and empowering our strategic business leaders. We work with very large and fast-arriving data and transform it into trustworthy insights. We build and manage pipelines, transformations, platforms, models, and so much more that empowers the Fabric product. As an Engineer on our team your core function will be Data Engineering, with opportunities in Analytics, Science, Software Engineering, DevOps, and Cloud Systems. You will be working alongside other Engineers, Scientists, Product, Architecture, and Visionaries bringing forth the next generation of data democratization products. We do not just value differences or different perspectives. We seek them out and invite them in so we can tap into the collective power of everyone in the company. As a result, our customers are better served. Responsibilities You will develop and maintain data pipelines, including solutions for data collection, management, transformation, and usage, ensuring accurate data ingestion and readiness for downstream analysis, visualization, and AI model training You will review, design, and implement end-to-end software life cycles, encompassing design, development, CI/CD, service reliability, recoverability, and participation in agile development practices, including on-call rotation You will review and write code to implement performance monitoring protocols across data pipelines, building visualizations and aggregations to monitor pipeline health. You’ll also implement solutions and self-healing processes that minimize points of failure across multiple product features You will anticipate data governance needs, designing data modeling and handling procedures to ensure compliance with all applicable laws and policies You will plan, implement, and enforce security and access control measures to protect sensitive resources and data You will perform database administration tasks, including maintenance and performance monitoring. 
You will collaborate with Product Managers, Data and Applied Scientists, Software and Quality Engineers, and other stakeholders to understand data requirements and deliver phased solutions that meet test and quality programs’ data needs, and support AI model training and inference You will become an SME on our team’s products and provide inputs for the strategic vision You will champion process, engineering, architecture, and product best practices in the team You will work with other team Seniors and Principals to establish best practices in our organization Embody our culture and values Qualifications Required/Minimum Qualifications Bachelor's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 4+ years' experience in business analytics, data science, software development, data modeling or data engineering work OR Master's Degree in Computer Science, Math, Software Engineering, Computer Engineering, or related field AND 2+ years' experience in business analytics, data science, software development, or data engineering work OR equivalent experience 2+ years of experience in software or data engineering, with proven proficiency in C#, Java, or equivalent 2+ years in one scripting language for data retrieval and manipulation (e.g., SQL or KQL) 2+ years of experience with ETL and data cloud computing technologies, including Azure Data Lake, Azure Data Factory, Azure Synapse, Azure Logic Apps, Azure Functions, Azure Data Explorer, and Power BI or equivalent platforms Preferred/Additional Qualifications 1+ years of demonstrated experience implementing data governance practices, including data access, security and privacy controls and monitoring to comply with regulatory standards. Other Requirements Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to the following specialized security screenings: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter. Equal Opportunity Employer (EOP) #azdat #azuredata #fabricdata #dataintegration #azure #synapse #databases #analytics #science Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.

Posted 1 week ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Note: Please apply only if you have 6 years or more of relevant experience (excluding internships), are comfortable working 5 days a week from Gurugram, Haryana, and are an immediate joiner or currently serving your notice period. About Eucloid At Eucloid, innovation meets impact. As a leader in AI and Data Science, we create solutions that redefine industries—from Hi-tech and D2C to Healthcare and SaaS. With partnerships with giants like Databricks, Google Cloud, and Adobe, we’re pushing boundaries and building next-gen technology. Join our talented team of engineers, scientists, and visionaries from top institutes like IITs, IIMs, and NITs. At Eucloid, growth is a promise, and your work will drive transformative results for Fortune 100 clients. What You’ll Do Design and implement robust frameworks for evaluating large language models (LLMs) across dimensions like accuracy, safety, hallucination, and reasoning. Build modular pipelines for automated, semi-automated, and human-in-the-loop evaluations. Integrate GenAI testing tools such as Giskard, RAGAS, DeepEval, TruLens, Opik/Comet, and LangSmith. Define and implement custom evaluation metrics tailored to use cases like RAG, agents, and safety guardrails. Curate or generate high-quality evaluation datasets across domains (e.g., legal, medical, QA, coding). Collaborate with developers to instrument tracing and logging for real-world model behavior capture. Build dashboards and reporting mechanisms to visualize performance, regressions, and model comparisons. Conduct prompt-based testing, chain-of-thought evaluations, adversarial testing, and A/B comparisons. Contribute to red-teaming and stress-testing efforts to uncover vulnerabilities and ethical risks. What Makes You a Fit Academic Background: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. Technical Expertise: Minimum 6 years of hands-on experience in building, testing, or evaluating AI/ML systems, with a strong focus on LLMs or Generative AI applications. Proficiency in Python, along with experience using ML/NLP libraries such as Hugging Face, LangChain, OpenAI SDK, or Cohere. Experience in building evaluation pipelines or benchmarks for LLM performance across metrics like accuracy, robustness, safety, and hallucination. Deep understanding of prompt engineering, retrieval-augmented generation (RAG), and agentic evaluation techniques. Hands-on familiarity with evaluation tools such as Giskard, RAGAS, DeepEval, TruLens, LangSmith, Opik/Comet, or similar. Working knowledge of vector databases like FAISS, Pinecone, or Weaviate, and embedding-based evaluation methods. Experience with CI/CD pipelines, unit/integration testing for LLM apps, and model versioning for reproducibility. Ability to define custom evaluation metrics tailored to specific use cases (e.g., RAG performance, guardrail compliance, hallucination detection). Strong grasp of model instrumentation techniques for tracing/logging model behavior in real-world flows. Extra Skills: Experience in developing LLM-based applications such as chatbots, copilots, or RAG systems. Exposure to designing or evaluating AI safety systems (e.g., jailbreaking prevention, content filters). Open-source contributions to GenAI tooling or evaluation libraries. Strong communication and documentation skills. Comfort working in fast-paced, research-heavy environments. 
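As a deliberately simple illustration of the custom evaluation metrics mentioned above, the sketch below scores the token-overlap "groundedness" of an answer against its retrieved context as a cheap first-pass hallucination signal. It is a toy heuristic over assumed inputs; frameworks such as RAGAS, DeepEval, or TruLens use far richer, model-based judges.

```python
# Toy groundedness metric: what fraction of the answer's tokens also appear
# in the retrieved context? Low scores hint at possible hallucination.
import re


def tokenize(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def groundedness(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the retrieved context."""
    answer_tokens = tokenize(answer)
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & tokenize(context)) / len(answer_tokens)


context = "The policy covers water damage but excludes flood events."
good    = "Water damage is covered, flood events are excluded."
drifted = "The policy also covers earthquakes up to 2 million dollars."

print(f"grounded answer:        {groundedness(good, context):.2f}")
print(f"possible hallucination: {groundedness(drifted, context):.2f}")
```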
Why You’ll Love It Here Innovate with the Best Tech: Work on groundbreaking projects using AI, GenAI, LLMs, and massive-scale data platforms. Tackle challenges that push the boundaries of innovation. Impact Industry Giants: Deliver business-critical solutions for Fortune 100 clients across Hi-tech, D2C, Healthcare, SaaS, and Retail. Partner with platforms like Databricks, Google Cloud, and Adobe to create high-impact products. Collaborate with a World-Class Team: Join exceptional professionals from IITs, IIMs, NITs, and global leaders like Walmart, Amazon, Accenture, and ZS. Learn, grow, and lead in a team that values expertise and collaboration. Accelerate Your Growth: Access our Centres of Excellence to upskill and work on industry-leading innovations. Your professional development is a top priority. Work in a Culture of Excellence: Be part of a dynamic workplace that fosters creativity, teamwork, and a passion for building transformative solutions. Your contributions will be recognized and celebrated. About Our Leadership Anuj Gupta – Former Amazon leader with over 22 years of experience in building and managing large engineering teams (B.Tech, IIT Delhi; MBA, ISB Hyderabad). Raghvendra Kushwah – Business consulting expert with 21+ years at Accenture and Cognizant (B.Tech, IIT Delhi; MBA, IIM Lucknow). Key Benefits Competitive salary and performance-based bonus. Comprehensive benefits package, including health insurance and flexible work hours. Opportunities for professional development and career growth. Location: Gurugram Submit your resume to saurabh.bhaumik@eucloid.com with the subject line “Application: Role Name.” Eucloid is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment.

Posted 1 week ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies