3.0 - 7.0 years
0 Lacs
karnataka
On-site
As an ideal candidate, you should have a strong grasp of the Software Development Life Cycle (SDLC), Python scripting, and Artificial Intelligence (AI) packages. Your conceptual knowledge should extend to Generative AI (GenAI) techniques, including prompt engineering approaches such as prompt templates, curation, and modification. You should be well-versed in the various foundation models for text, image, video, and other data. Experience in training and integrating Deep Neural Network (DNN) modules, including Convolutional Neural Networks (CNNs) and Transformer/Recurrent Neural Networks (RNNs), is essential for this role. Additionally, you should possess GenAI knowledge concerning Large Language Models (LLMs) and multimodal models, such as Stable Diffusion, autoregressive models, and Diffusion Transformers, for utilizing world foundation models and peripheral development. Your expertise should also include fine-tuning (transfer learning) techniques for multimodal foundation models, as well as managing multimodal data pipelines involving tasks like cleaning, augmentation, and creating embeddings. You should be familiar with data pruning/curation and labeling in the context of AI training and validation. Furthermore, a good understanding of the Autonomous Driving (AD) domain is required, encompassing knowledge of domain stack components, features, sensor configuration, and the format and types of annotations necessary for various components in the stack for development and testing. Your comprehensive skill set in these areas will be crucial for excelling in this position.
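To make the embedding step of such a multimodal data pipeline concrete, here is a minimal, hypothetical sketch assuming PyTorch/torchvision are available; the file name and the choice of a ResNet-50 backbone are placeholders, not requirements of the role:

```python
# Minimal sketch: image embeddings from a pretrained CNN backbone,
# one step of a multimodal data pipeline (file name is a placeholder).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained ResNet-50 and drop its classification head to expose embeddings.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

with torch.no_grad():
    img = preprocess(Image.open("frame_0001.jpg").convert("RGB")).unsqueeze(0)
    embedding = backbone(img)  # shape: (1, 2048)
print(embedding.shape)
```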
Posted 5 days ago
0.0 - 4.0 years
0 Lacs
delhi
On-site
We are looking for highly motivated and resourceful interns who are eager to gain hands-on experience in market research and data analysis. This internship will provide valuable insights into the world of market intelligence and will be an excellent opportunity for personal and professional growth.
Roles & Responsibilities:
- Market Data Collection: Assist in gathering secondary market research data through various sources, including surveys, online research, public data repositories, and industry reports.
- Data Curation: Organize and maintain data repositories, ensuring data accuracy, completeness, and relevance.
- Data Analysis: Analyze collected data using statistical tools and techniques to derive insights and trends.
- Presentation and Report Writing: Prepare reports and presentations summarizing research findings and insights.
- Collaborative Work: Collaborate with team members on various projects, sharing insights and best practices.
- Research Support: Assist in ad-hoc research tasks and provide support to the team as needed.
Job Type: Internship
Contract length: 3 months
Benefits:
- Flexible schedule
- Paid sick time
- Paid time off
- Work from home
Schedule:
- Day shift
- Monday to Friday
Work Location: In person
Expected Start Date: 01/05/2025
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Data Engineer at GlobalLogic, you will be responsible for architecting, building, and maintaining complex ETL/ELT pipelines for batch and real-time data processing using various tools and programming languages. Your key duties will include optimizing existing data pipelines for performance, cost-effectiveness, and reliability, as well as implementing data quality checks, monitoring, and alerting mechanisms to ensure data integrity. Additionally, you will play a crucial role in ensuring data security, privacy, and compliance with relevant regulations such as GDPR and local data laws.

To excel in this role, you should possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Excellent analytical, problem-solving, and critical thinking skills with meticulous attention to detail are essential. Strong communication (written and verbal) and interpersonal skills are also required, along with the ability to collaborate effectively with cross-functional teams. Experience with Agile/Scrum development methodologies is considered a plus.

Your responsibilities will involve providing technical leadership and architecture by designing and implementing robust, scalable, and efficient data architectures that align with organizational strategy and future growth. You will define and enforce data engineering best practices, evaluate and recommend new technologies, and oversee the end-to-end data development lifecycle. As a leader, you will mentor and guide a team of data engineers, conduct code reviews, provide feedback, and promote a culture of engineering excellence. You will collaborate closely with data scientists, data analysts, software engineers, and business stakeholders to understand data requirements and translate them into technical solutions. Your role will also involve communicating complex technical concepts and data strategies effectively to both technical and non-technical audiences.

At GlobalLogic, we offer a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility, and a high-trust environment. By joining our team, you will have the chance to work on impactful projects, engage your curiosity and problem-solving skills, and contribute to shaping cutting-edge solutions that redefine industries. With a commitment to integrity and trust, GlobalLogic provides a safe, reliable, and ethical global environment where you can thrive both personally and professionally.
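As one concrete illustration of the data quality checks and alerting described above, here is a minimal, hypothetical sketch in Python with pandas; the column names and the 1% threshold are invented for the example, not taken from the posting:

```python
# Minimal data-quality gate for a batch pipeline stage (names hypothetical).
import pandas as pd

def quality_check(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast if the batch violates basic integrity rules."""
    assert df["order_id"].is_unique, "duplicate primary keys"
    assert df["amount"].ge(0).all(), "negative amounts"
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:  # illustrative alerting threshold
        raise ValueError(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    return df

# Usage with a toy batch:
df = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 5.5], "customer_id": ["a", "b"]})
quality_check(df)
```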
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
karnataka
On-site
As an AI Data Solutions Engineer at SigTuple in Bangalore, you will play a crucial role in designing and developing end-to-end data solutions for managing the vast amounts of data and images generated by our AI-powered diagnostic solutions. Your responsibilities will involve collaborating with cross-functional teams, establishing a robust infrastructure, implementing best practices for data curation, and driving innovation in data engineering methodologies and technologies.

You will work closely with product, regulatory, precision optics, robotics, microfluidics, data science, and cloud and edge deployment teams to ensure alignment and effectiveness of data solutions. Your role will also include serving as a liaison with stakeholders to ensure smooth functioning and alignment of data processes. Furthermore, you will be responsible for maintaining the integrity, security, and accessibility of our data assets within the organization.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. You must have proven experience in data engineering and strong proficiency in Python, as well as familiarity with modern data engineering tools and frameworks. Excellent communication and interpersonal skills are essential for effective collaboration across multidisciplinary teams. Experience with FDA verification and patent processes would be advantageous, and a passion for driving innovation and continuous improvement in data engineering and AI-driven solutions is highly valued.

If you are a self-motivated individual with a passion for leveraging AI to revolutionize healthcare, this is an exciting opportunity to make a significant impact on the future of healthcare at SigTuple.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
SoundHound AI is seeking a Language Specialist to join the Data Programs team. In this role, you will be responsible for validating speech and text language data used to train our ASR models, with a particular focus on Kannada phonetics and phonology. You will collaborate closely with the ASR and NLU engineering teams to establish phoneme sets, pronunciation rules, and test case phrases for our Voice AI products. Your contributions will play a vital role in enhancing the voice user experience and domain performance.

Key Responsibilities:
- Assist in gathering and curating speech and text data for the ASR team
- Establish phoneme sets and pronunciation rules for the Kannada language
- Provide test case phrases for relevant domains
- Enhance domain performance through artificial training data sets for Language Models
- Conduct testing of systems and analyze the results to ensure optimal performance

To be successful in this role, you should meet the following qualifications:
- Completion of formal language studies, a language degree, or a degree in Linguistics (or equivalent experience)
- Native-level proficiency in spoken and written Kannada
- Familiarity with linguistics, phonetics, and phonology
- Experience as a data evaluator and in training data for machine learning
- Proficiency in data curation, data quality, or software QA
- Knowledge of Bash, Python, or other programming languages
- Background in project management/coordination
- Proficiency with Google Docs, Jira, Confluence

SoundHound AI is committed to being a values-driven company that prioritizes diversity, equity, inclusion, and belonging. We believe that a team with global perspectives is essential for our mission to build Voice AI for the world. If you are passionate about language validation, phonetics, and contributing to cutting-edge technology, we invite you to join our team and make a difference in the world of Voice AI. Learn more about our culture and career opportunities at https://www.soundhound.com/careers.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
You will be responsible for providing excellent customer service and exceeding customer expectations. Your main tasks will include ensuring logical and meaningful extraction of URL content with 100% SLA adherence, preparing action plans for process improvements, and delivering high-quality extractions consistently without fail. It is essential to develop domain expertise, challenge discrepancies, and conduct high-quality Root Cause Analyses (RCAs) to identify recurring high-severity issues.

Ideally, you should have 2 to 3 years of experience in customer support or technical support in the online industry, particularly with processes related to data curation and extraction. A good understanding of websites and how to navigate them to retrieve desired information is required. You must possess excellent communication skills, both written and oral, decision-making abilities, basic knowledge of Excel sheets, logical and analytical skills, and a keen eye for detail. Time management skills, proactive organization, and a successful track record in a team environment are essential for this role.
Posted 3 weeks ago
2.0 - 6.0 years
0 Lacs
kochi, kerala
On-site
As a Support Engineer in our AI Enablement team at InfoPark, Cochin, you will be responsible for bridging the gap between AI technology and customer support operations in the homeowners insurance domain. Your role will involve training and deploying AI models to automate support tasks, optimizing intelligent tools for underwriting and claims workflows, and enhancing customer and agent interactions across digital channels, while also supporting existing manual tasks.

Your key responsibilities will include preparing, labeling, and validating underwriting and claims support data for AI model training, collaborating with AI engineers to automate routine tasks, monitoring AI performance in production, and providing feedback for continuous improvement. You will also act as a liaison between the AI development team and customer support teams, onboard support staff on AI-driven tools, and ensure system maintenance and troubleshooting to support the stability of AI tools across platforms.

To be successful in this role, you should have at least 2 years of experience in technical support, customer support, or insurance operations, preferably in homeowners insurance. Additionally, you should possess a basic understanding of AI/ML concepts, particularly Natural Language Processing (NLP), and have experience with support platforms like Zendesk, HubSpot, or Salesforce. Strong analytical and troubleshooting skills and excellent interpersonal and communication skills are also essential. Preferred qualifications include experience in labeling or training datasets for AI/chatbot models; exposure to tools like ChatGPT, Gemini, and Copilot; knowledge of data privacy practices and compliance standards in the insurance sector; and basic proficiency in Python or SQL for data handling.

Join us to play a central role in transforming the insurance industry with AI, collaborate with global teams, work in a modern, innovation-driven environment at InfoPark, Cochin, and enjoy a flexible, inclusive work culture with growth opportunities in AI and insurance technology. This full-time position in Kochi requires a minimum of 2 years of relevant experience, and you should be ready to relocate to Kochi and work from the office immediately. You will report to the Director of AI Operations / Support Automation Lead.

Qualifications: Any graduate, preferably in a finance or IT field.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
kerala
On-site
As a Senior AI Engineer (Tech Lead) at EY, you will have the opportunity to leverage your technical expertise to develop and implement cutting-edge AI solutions. With a minimum of 4 years of experience in Data Science and Machine Learning, including NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture, you will play a crucial role in driving innovation and creating impactful solutions for enterprise industry use cases.

Your responsibilities will include contributing to the design and implementation of state-of-the-art AI solutions, leading a team of 4-6 developers, and collaborating with stakeholders to identify business opportunities and define AI project goals. You will stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilizing generative AI techniques, such as LLMs and agentic frameworks, you will develop innovative solutions tailored to specific business requirements. Your role will also involve integrating with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to enhance generative AI capabilities. Additionally, you will be responsible for implementing and optimizing end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Your expertise in data engineering, DevOps, and MLOps practices will be valuable in curating, cleaning, and preprocessing large-scale datasets for generative AI applications.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and demonstrate proficiency in Python, Data Science, Machine Learning, OCR, and document intelligence. Strong collaboration with software engineering and operations teams, along with excellent problem-solving and analytical skills, will be essential in translating business requirements into technical solutions. Moreover, your familiarity with trusted AI practices, data privacy, security, and ethical considerations will ensure the fairness, transparency, and accountability of AI models and systems. A solid understanding of NLP techniques, frameworks like TensorFlow or PyTorch, and cloud platforms such as Azure, AWS, or GCP will be beneficial in deploying AI solutions in a cloud environment. Proficiency in designing or interacting with agent-based AI architectures, implementing optimization tools and techniques, and driving DevOps and MLOps practices will also be advantageous in enhancing the performance and efficiency of AI models.

Join EY to build an exceptional experience for yourself and contribute to creating a better working world for all through the power of AI and technology.
Posted 3 weeks ago
5.0 - 9.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Databricks Developer
Primary Skill: Azure Data Factory, Azure Databricks
Secondary Skill: SQL, Sqoop, Hadoop
Experience: 5 to 9 years
Location: Chennai, Bangalore, Pune, Coimbatore
Requirements:
- Cloud certified in one of these categories: Azure Data Engineer (Azure Data Factory, Azure Databricks)
- Spark (PySpark or Scala), SQL, data ingestion, curation (a sketch follows this list)
- Semantic modelling/optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle
- Experience in Sqoop/Hadoop
- Microsoft Excel (for metadata files with requirements for ingestion)
- Any other certificate in Azure/AWS/GCP and hands-on data engineering experience in cloud
- Strong programming skills with at least one of Python, Scala, or Java
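For illustration only, a hedged sketch of the kind of Spark (PySpark) ingestion-and-curation step the requirements list; the paths, column names, and the Delta output format (standard on Databricks) are assumptions:

```python
# Minimal batch ingestion-and-curation step in PySpark (names illustrative).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_curate").getOrCreate()

# Read a raw landing file, then deduplicate, cast, and filter bad rows.
raw = spark.read.option("header", True).csv("/mnt/landing/orders.csv")
curated = (raw
           .dropDuplicates(["order_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull()))

# Write the curated table in Delta format for downstream consumers.
curated.write.mode("overwrite").format("delta").save("/mnt/curated/orders")
```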
Posted 3 weeks ago
2.0 - 3.0 years
2 - 5 Lacs
Bengaluru
Hybrid
- Build 5+ ecosystem insight dashboards (heatmaps, scoring models, benchmarking)
- Create data-driven narratives to support platform GTM and delivery
- Identify anomalies and trends in vendor, location, and regulatory intelligence (see the sketch after this list)
- Define and track cross-foundation KPIs in collaboration with stakeholders
- Ensure >95% accuracy in analytical reports and predictive inputs
Competencies:
- SQL, Excel, BI tools (Power BI, Tableau, Looker)
- Strong statistical reasoning and pattern recognition
- Analytical storytelling, dashboard UX, and metric structuring
- Quality control, collaborative iteration, and agile data mindset
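A tiny, hypothetical example of the anomaly-flagging task above, using a simple z-score rule in Python with pandas; the series values and the 2-sigma threshold are invented for illustration:

```python
# Flag KPI values more than 2 standard deviations from the mean (toy data).
import pandas as pd

kpi = pd.Series([100, 102, 98, 101, 180, 99], name="daily_leads")
z = (kpi - kpi.mean()) / kpi.std()
print(kpi[z.abs() > 2])  # flags the 180 spike as an anomaly
```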
Posted 3 weeks ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
At Goldman Sachs, our Engineers are dedicated to making the impossible possible. We are committed to changing the world by bridging the gap between people and capital with innovative ideas. Our mission is to tackle the most complex engineering challenges for our clients, crafting massively scalable software and systems, designing low-latency infrastructure solutions, proactively safeguarding against cyber threats, and harnessing the power of machine learning in conjunction with financial engineering to transform data into actionable insights. Join our engineering teams to pioneer new businesses, revolutionize finance, and seize opportunities in the fast-paced world of global markets.

Engineering at Goldman Sachs, consisting of our Technology Division and global strategists groups, stands at the heart of our business. Our dynamic environment demands creative thinking and prompt, practical solutions. If you are eager to explore the limits of digital possibilities, your journey starts here. Goldman Sachs Engineers embody innovation and problem-solving skills, developing solutions in various domains such as risk management, big data, and mobile technology. We seek imaginative collaborators who can adapt to change and thrive in a high-energy, global setting.

The Data Engineering group at Goldman Sachs plays a pivotal role across all aspects of our business. Focused on offering a platform, processes, and governance to ensure the availability of clean, organized, and impactful data, Data Engineering aims to scale, streamline, and empower our core businesses. As a Site Reliability Engineer (SRE) on the Data Engineering team, you will oversee observability, cost, and capacity, with operational responsibility for some of our largest data platforms. We are actively involved in the entire lifecycle of platforms, from design to decommissioning, employing an SRE strategy tailored to this lifecycle.

We are looking for individuals who have a development background and are proficient in code. Candidates should prioritize Reliability, Observability, Capacity Management, DevOps, and SDLC (Software Development Lifecycle). As a self-driven leader, you should be comfortable tackling problems with varying degrees of complexity and translating them into data-driven outcomes. You should be actively engaged in strategy development, participate in team activities, conduct postmortems, and possess a problem-solving mindset.

Your responsibilities as a Site Reliability Engineer (SRE) will include driving the adoption of cloud technology for data processing and warehousing, formulating SRE strategies for major platforms like Lakehouse and Data Lake, collaborating with data consumers and producers to align reliability and cost objectives, and devising strategies with data using relevant technologies such as Snowflake, AWS, Grafana, PromQL, Python, Java, Open Telemetry, and Gitlab.

Basic qualifications for this role include a Bachelor's or Master's degree in a computational field, 1-4+ years of relevant work experience in a team-oriented environment, at least 1-2 years of hands-on developer experience, familiarity with DevOps and SRE principles, experience with cloud infrastructure (AWS, Azure, or GCP), a proven track record in driving data-oriented strategies, and a deep understanding of data multi-dimensionality, curation, and quality. Preferred qualifications entail familiarity with Data Lake / Lakehouse technologies, experience with cloud databases like Snowflake and BigQuery, understanding of data modeling concepts, working knowledge of open-source tools such as AWS Lambda and Prometheus, and proficiency in coding with Java or Python. Strong analytical skills, excellent communication abilities, a commercial mindset, and a proactive approach to problem-solving are essential traits for success in this role.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for designing, building, and deploying scalable NLP/ML models for real-world applications. Your role will involve fine-tuning and optimizing Large Language Models (LLMs) using techniques like LoRA, PEFT, or QLoRA. You will work with transformer-based architectures such as BERT, GPT, LLaMA, and T5, and develop GenAI applications using frameworks like LangChain, Hugging Face, the OpenAI API, or RAG (Retrieval-Augmented Generation). Writing clean, efficient, and testable Python code will be a crucial part of your tasks, as will collaborating with data scientists, software engineers, and stakeholders to define AI-driven solutions. Additionally, you will evaluate model performance and iterate rapidly based on user feedback and metrics.

The ideal candidate should have a minimum of 3 years of experience in Python programming with a strong understanding of ML pipelines. A solid background in NLP, including text preprocessing, embeddings, NER, and sentiment analysis, is required. Proficiency in ML libraries such as scikit-learn, PyTorch, TensorFlow, Hugging Face Transformers, and spaCy is essential. Experience with GenAI concepts, including prompt engineering, LLM fine-tuning, and vector databases like FAISS and ChromaDB, will be beneficial. Strong problem-solving and communication skills are highly valued, along with the ability to learn new tools and work both independently and collaboratively in a fast-paced environment. Attention to detail and accuracy is crucial for this role.

Preferred skills include theoretical knowledge or experience in Data Engineering, Data Science, AI, ML, RPA, or related domains. Certification in Business Analysis or Project Management from a recognized institution is a plus, as is experience working with agile methodologies such as Scrum or Kanban. Additional experience in deep learning, transformer architectures and models, prompt engineering, training LLMs, and GenAI pipeline preparation will be advantageous. Practical experience in integrating LLM models like ChatGPT, Gemini, and Claude with context-aware capabilities using RAG or fine-tuned models is a plus, as is knowledge of model evaluation and alignment and of metrics to measure model accuracy. Data curation from sources for RAG preprocessing and development of LLM pipelines is an added advantage. Proficiency with scalable deployment and logging tooling, including Flask, Django, FastAPI, APIs, Docker containerization, and Kubeflow, is preferred, along with familiarity with LangChain, LlamaIndex, vLLM, Hugging Face Transformers, LoRA, and a basic understanding of cost-to-performance tradeoffs.
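As a hedged sketch of the LoRA fine-tuning setup the role names, here is a minimal example using Hugging Face Transformers and PEFT; GPT-2 stands in for a larger LLM, and the hyperparameters are illustrative, not prescriptive:

```python
# Minimal LoRA setup with Hugging Face PEFT (model and hyperparameters illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in for a larger LLM
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Attach low-rank adapters to GPT-2's attention projection layers.
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"])
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only a small fraction is trainable
```

The wrapped model can then be passed to a standard Transformers `Trainer` loop; only the adapter weights are updated, which is what makes LoRA cheap relative to full fine-tuning.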
Posted 1 month ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
At Goldman Sachs, our Engineers don't just make things - we make things possible. We change the world by connecting people and capital with ideas, solving the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets.

Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business. Our dynamic environment requires innovative strategic thinking and immediate, real solutions. If you want to push the limit of digital possibilities, start here. Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile, and more. We look for creative collaborators who evolve, adapt to change, and thrive in a fast-paced global environment.

Data plays a critical role in every facet of the Goldman Sachs business. The Data Engineering group is at the core of that offering, focusing on providing the platform, processes, and governance for enabling the availability of clean, organized, and impactful data to scale, streamline, and empower our core businesses. As a Site Reliability Engineer (SRE) on the Data Engineering team, you will be responsible for observability, cost, and capacity, with operational accountability for some of Goldman Sachs's largest data platforms. We engage in the full lifecycle of platforms, from design to demise, with an SRE strategy adapted to that lifecycle.

We are looking for individuals with a background as a developer who can express themselves in code. You should have a focus on Reliability, Observability, Capacity Management, DevOps, and SDLC (Software Development Lifecycle). As a self-leader comfortable with problem statements, you should structure them into data-driven deliverables. You will drive strategy with skin in the game, participate in the team's activities, drive Postmortems, and have an attitude that the problem stops with you.
**How You Will Fulfil Your Potential**
- Drive adoption of cloud technology for data processing and warehousing
- Drive SRE strategy for some of GS's largest platforms, including Lakehouse and Data Lake
- Engage with data consumers and producers to match reliability and cost requirements
- Drive strategy with data

**Relevant Technologies**: Snowflake, AWS, Grafana, PromQL, Python, Java, Open Telemetry, Gitlab

**Basic Qualifications**
- A Bachelor's or Master's degree in a computational field (Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline)
- 1-4+ years of relevant work experience in a team-focused environment
- 1-2 years of hands-on developer experience at some point in your career
- Understanding and experience of DevOps and SRE principles and automation, managing technical and operational risk
- Experience with cloud infrastructure (AWS, Azure, or GCP)
- Proven experience in driving strategy with data
- Deep understanding of the multi-dimensionality of data, data curation, and data quality
- In-depth knowledge of relational and columnar SQL databases, including database design
- Expertise in data warehousing concepts
- Excellent communication skills
- Independent thinker, willing to engage, challenge, or learn
- Ability to stay commercially focused and to always push for quantifiable commercial impact
- Strong work ethic, a sense of ownership and urgency
- Strong analytical and problem-solving skills
- Ability to build trusted partnerships with key contacts and users across business and engineering teams

**Preferred Qualifications**
- Understanding of Data Lake / Lakehouse technologies incl. Apache Iceberg
- Experience with cloud databases (e.g., Snowflake, BigQuery)
- Understanding of data modeling concepts
- Working knowledge of open-source tools such as AWS Lambda and Prometheus
- Experience coding in Java or Python
Posted 1 month ago
1.0 - 3.0 years
0 - 0 Lacs
Pune
Work from Office
Role & responsibilities:

1) Prospect Database Aggregation & Management
- Source high-potential contacts using LinkedIn Sales Navigator, Apollo, ZoomInfo, Crunchbase, and intent data tools.
- Build and maintain a structured prospect database segmented by industry, region, company size, persona, and readiness.
- Enrich records with firmographic, technographic, funding, and hiring signals.
- Clean, deduplicate, and prepare records for CRM ingestion and MAP sync.

2) Suspect List Creation & ICP-Based Segmentation
- Apply Coditas ICP scoring logic to flag high-potential accounts and contacts.
- Create suspect-to-MQL pipelines with clear segmentation by vertical and persona.
- Continuously refine segments based on campaign response and conversion data.

3) Inbound Lead Qualification & MQL Ownership
- Qualify inbound leads from website, chatbot, gated content, and events.
- Execute timely follow-ups using guided discovery, pain-point probing, and urgency qualification.
- Classify leads into MQLs, SALs, or nurture/disqualify.
- Coordinate with Sales for MQL handoff and feedback loop.

4) MQL List Compilation & Conversion Insight
- Generate weekly MQL reports tagged by engagement behavior, ICP score, and funnel intent.
- Track suspect-to-MQL and MQL-to-SAL conversion trends; participate in campaign retros.

5) CRM Hygiene & Data Integrity Operations
- Ensure the CRM is accurately updated with verified data, enrichment fields, qualification notes, and attribution history.
- Help audit MAP sync, lead routing rules, and ICP tagging accuracy.
- Flag anomalies and data gaps proactively to RevOps/Data teams.

6) Funnel Insights & GTM Feedback Loops
- Monitor funnel drop-offs and conversion bottlenecks.
- Recommend adjustments to targeting, qualification, and handoff processes.
- Share actionable feedback on campaign performance, messaging resonance, and persona-level engagement.

Sample Outputs & Deliverables:
- Prospect Database Tracker, segmented by ICP
- Weekly Suspect-to-MQL Conversion Report
- Inbound Qualification Playbook: scripts, objection handling, ICP flags
- MQL Scorecard: lead quality analysis by campaign/persona
- Lead Feedback Log: reasons for MQL rejection by Sales
Posted 1 month ago
8.0 - 13.0 years
40 - 45 Lacs
Hyderabad
Work from Office
Role Summary: The Lead Data Scientist, Foundational Analytics is pivotal in ensuring that core data assets support the growth of our business. The appointed Lead will be required to apply their experience in managing a range of analytical resources and functions to develop a centre of excellence for analytic capabilities, fostering a culture of innovation, collaboration and expertise. They will work collaboratively with key stakeholders across the business to understand and set appropriate analytical strategies that support business growth, apply these strategies to their business planning, and in particular support business growth initiatives through allocation of appropriate resources, subject matter expertise and long-range planning. They will act as experts in developing the way we manage our foundational data assets and drive innovative analytical solutions that create value. They will also work with Data Engineers and implementation teams on solution design and with business stakeholders on change management.

Role & responsibilities:
- Develop relationships with key internal and external stakeholders across Quantium to understand and support the strategic goals and key initiatives of the company
- Develop and execute strategy and business plans for the ongoing development of our foundational data assets
- Manage Lead and Senior Analysts in other verticals to achieve business plan goals on an ongoing basis across a variety of teams
- Set and manage aligned KPIs across one analytics team focussed on a single data partner
- Develop a culture of innovation and analytic excellence by focussing on development of existing team members and hiring the best talent available
- Drive accountability within the teams to achieve deadlines and the best possible analytic outcomes
- Support growth of the business through implementation of capabilities that streamline the ingestion and preparation of new data assets
- Support development of technology solutions that focus on automation of solutions, operational excellence and innovation by working with implementation and technology teams on solution design

Key activities:
- Set and manage business plans for three or more foundational data assets, including data curation, customer segmentation and other foundational analytics
- Engage with business and external stakeholders to gather feedback and requirements to drive the foundational analytics roadmap for key business initiatives
- Set up regular forums to engage stakeholders across the business that use these foundational data assets, to receive and give feedback on current and future initiatives and ensure they are supported appropriately
- Manage the high-level prioritization process across teams, including the use of WIPs, charter cards and business plans which are reviewed by the team on a regular basis, and manage stakeholders' expectations regarding timeframes
- Develop and implement resource plans to support business plans and roadmaps
- Take responsibility for team recruitment, for both external and internal candidates
- Ensure tailored performance plans and KPIs are set, monitored and managed on a regular basis for all team members
- Provide coaching and support to direct reports to develop their management and analytical capabilities
- Monitor and approve technical designs that implement analytical outcomes and models
- Drive a culture of innovation by engaging with analysts across the team through a variety of forums, including one-to-ones, skip-level meetings, lunchtime sessions and analytical showcases
- Foster a cohesive team spirit through team meetings, awards and strong communication
- Take a structured and analytical approach to troubleshooting data issues and problems
- Act as performance manager for junior team members, enabling the team to succeed by resolving issues and building on strengths

Preferred candidate profile:
- Strong background and understanding of data structures, data manipulation and transformation
- Previous experience in the fields of data science or high-level data analysis; data modelling experience highly regarded
- Previous experience working directly with Big Data Engineers highly regarded
- Previous experience with project management and people management
- Proven ability and experience in optimizing the performance of an existing process, with a can-do attitude
- Desire to work with a team of feedback-driven analysts and developers in a cross-functional and agile environment
- Lead-level data science/data analytics experience and management of one or more teams of analysts
- A variety of analytical roles preferred, including consulting

Skills Required:
- Sound knowledge of the technical analytics discipline, including data preparation, feature engineering and foundational analytics concepts, preferably including model development and model training
- Sufficient skills in several disciplines or techniques to apply them autonomously, without the need for material guidance or revision by others
- Sufficient skills in at least one discipline or technique to be an authoritative source of guidance and support for the team, and to solve problems of high technical complexity in that domain
- Strong problem-solving skills, including the ability to apply a systematic problem-solving approach
- Solid interpersonal skills, as the role will need to work in a cross-functional team with both other analysts and specialists from non-analytics disciplines
- Very strong client and stakeholder management skills
- Ability to autonomously carry out work following analytics best practice as defined at Quantium for the relevant types of tools or techniques, suggesting improvements where appropriate
- Ability to motivate peers to follow analytics best practice as defined at Quantium, by positively presenting the benefits and importance of these ways of working
- Commercial acumen to understand business needs and the commercial impacts of different analytics solutions or approaches
- Attention to detail
- Drive for continuous improvement
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
hyderabad, telangana
On-site
As a Data Curator joining our AI development team, you will play a crucial role in managing and enhancing the datasets utilized for training and validating cutting-edge AI algorithms for X-ray image analysis. Your responsibilities will include acquiring and organizing X-ray image datasets from diverse sources, annotating and labeling images accurately, maintaining data integrity through quality control processes, collaborating with AI researchers, documenting dataset characteristics and curation processes, and staying updated on the latest trends in AI and data curation.

You should possess proven experience in data curation or database management, familiarity with AI and machine learning concepts (especially in image analysis), proficiency in data annotation tools and software, familiarity with the Python programming language, and experience in the Non-Intrusive Inspection industry. It is essential to complete assigned tasks on time and in alignment with the appropriate processes.

In addition to technical skills, soft skills are vital for this position. You should have the ability to accomplish tasks without supervision, excellent verbal and written communication skills, good collaborative skills, strong documentation skills, software process discipline, analytical skills, attention to detail, self-initiative, and self-management capabilities.

To qualify for this position, you should hold a BSc/MSc in Data Science or a related field and have up to 2 years of experience in a related role. Stay updated with the latest trends in AI, machine learning, and data curation to continuously enhance our data management practices. Additionally, be prepared to travel for data collection, troubleshooting, and operational observation to ensure the success of our AI projects.
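To make the quality-control idea concrete, here is a minimal, hypothetical manifest check in Python; the JSON layout and the label set are invented for illustration and are not taken from the posting:

```python
# Sanity-check an annotation manifest for an X-ray dataset (layout hypothetical).
import json
import os

ALLOWED_LABELS = {"normal", "foreign_object", "artifact"}  # invented label set

def validate_manifest(path: str) -> list:
    """Return a list of integrity problems found in an annotation manifest."""
    errors = []
    with open(path) as f:
        records = json.load(f)  # expected: [{"image": ..., "label": ...}, ...]
    for rec in records:
        if not os.path.exists(rec["image"]):
            errors.append("missing file: %s" % rec["image"])
        if rec["label"] not in ALLOWED_LABELS:
            errors.append("unknown label: %s" % rec["label"])
    return errors
```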
Posted 1 month ago
2.0 - 6.0 years
6 - 15 Lacs
Gurugram
Remote
AI Trainer - CAD & 3D Design Specialist (SolidWorks / AutoCAD / Blender)
Position Overview
We're on the lookout for skilled professionals proficient in SolidWorks, AutoCAD, or Blender to join our AI Training initiative. As a CAD & 3D Design AI Trainer, you'll record and document real-world workflows to help train intelligent AI Agents that can understand and replicate complex software usage. This is an exciting opportunity for experienced mechanical designers, architects, 3D artists, or CAD drafters to contribute to cutting-edge AI development.
Key Responsibilities
- Demonstrate real software workflows using SolidWorks, AutoCAD, or Blender (choose your specialty).
- Record structured, step-by-step tasks including tool usage, modeling techniques, and design logic.
- Highlight real-world use cases, best practices, and edge cases in your domain.
- Maintain high-quality standards of output while ensuring workflows are reproducible and AI-trainable.
- Collaborate with the AI training team to refine workflows and ensure task coverage across tools.
Preferred Qualifications
- 2+ years of hands-on experience using one (or more) of the following tools:
  - SolidWorks: part modeling, assemblies, simulation, and drawings.
  - AutoCAD: 2D drafting, 3D modeling, architectural layouts, or mechanical drawings.
  - Blender: modeling, shading, rendering, and use of modifiers or Geometry Nodes.
- Deep familiarity with shortcut keys, tool logic, and project workflows.
- Comfortable with screen recording software and clear, instructional narration.
- Familiarity with design standards, tolerances, and file organization.
- Prior experience creating tutorials, documentation, or training materials is a bonus.
Working Environment
- Flexible remote work opportunity
- Contribute to AI innovation with real-world expertise
- Collaborate with a global AI research and data curation team
Mandatory Application Form: https://forms.gle/YdT1Xy2cB4NhFghe9
If you're passionate about design and excited to teach AI how humans build, draw, and model, we want to hear from you!
Posted 1 month ago
0.0 - 1.0 years
0 Lacs
Pune
Work from Office
Job Summary
As an Intern Scientific Curator, you will assist the genomics research and product development teams by supporting literature curation, genetic data interpretation, and content development activities. This internship will provide you with hands-on experience in scientific data curation, exposure to genetic databases and pipelines, and opportunities to participate in cross-functional discussions and knowledge-sharing sessions.
Key Responsibilities
- Support the systematic review of scientific literature to identify relevant genetic markers, gene-trait associations, and clinical insights.
- Assist in extracting, organizing, and annotating genetic and genomic data using standardized formats and controlled vocabularies.
- Help maintain internal genetic databases, ensuring consistency, accuracy, and traceability of curated content.
- Work alongside the bioinformatics team to learn how curated data is integrated into analysis pipelines.
- Contribute to drafting and reviewing technical and scientific content for internal presentations, training materials, and marketing support.
- Join knowledge-sharing discussions, taking notes and supporting post-meeting documentation or action items.
- Support market analysis and help evaluate global genomics trends and new test ideas.
Learning Outcomes
- Gain practical exposure to genomic data interpretation and literature curation.
- Understand how curation feeds into bioinformatics workflows and report generation.
- Learn professional documentation practices and scientific communication.
- Be mentored by experienced professionals across scientific, bioinformatics, and product teams.
Posted 1 month ago
2.0 - 3.0 years
1 - 3 Lacs
Gurugram
Work from Office
Job Summary
Provide excellent customer service and delight customers by exceeding their expectations. Ensure logical and meaningful extraction of URL content along with 100% SLA adherence. Prepare and implement action plans to drive process improvements. Deliver high-quality extraction consistently without fail and ensure process compliance. Develop domain expertise and work like an SME. Challenge what seems not right! Perform high-quality RCAs to identify the root causes of repeating high-severity issues.
Responsibilities
- Customer support or technical support experience in the online industry preferred, 2 to 3 years, ideally with processes related to data curation/extraction
- Good knowledge of websites and how to navigate them to obtain desired information
- Excellent communication skills, both written and oral
- Decision-making skills
- Basic knowledge of Excel sheets
- Strong logical and analytical skills and cognitive ability
- Focus on accuracy and an eye for detail
- Excellent time management skills, as the workload involves fluctuation
- Extremely proactive and organized, with a track record of success in a team environment
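For illustration, here is a minimal, hypothetical Python sketch of structured URL-content extraction; the use of requests and BeautifulSoup is an assumption, as the posting does not name specific tooling:

```python
# Fetch a page and pull a structured record from it (URL is a placeholder).
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com", timeout=10)
resp.raise_for_status()  # surface HTTP errors instead of parsing bad pages

soup = BeautifulSoup(resp.text, "html.parser")
record = {
    "title": soup.title.string if soup.title else None,
    "headings": [h.get_text(strip=True) for h in soup.find_all("h1")],
}
print(record)
```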
Posted 1 month ago
3.0 - 8.0 years
12 - 15 Lacs
Faridabad
Work from Office
Kindly visit the URL https://www.rcb.res.in/current-opportunities to apply for various positions under the project entitled "Setting up of the Indian Biological Data Centre (IBDC)", Regional Centre for Biotechnology (Address: 3rd Milestone, Faridabad-Gurgaon Expressway, Faridabad, Haryana, 121001, India). The eligibility, job description, emoluments and other terms & conditions are as under:

1) Database Manager - 1 Vacancy
Essential Qualification & Experience: PhD with three (03) years of experience, or MTech in CS/IT/E&C/Bioinformatics/Computational Biology/Data Science with eight (08) years of experience in an academic setting/industry in design/implementation of modern database technologies, HPC and parallel computing, cloud computing etc.
Job Description:
- Supervise and lead the software developers and work in close coordination with domain experts on database design, application/portal development, and development of APIs etc. for various biological databases to be developed at IBDC.
- Interact closely with groups managing the hardware/OS of IBDC computing resources.
Monthly Consolidated Emoluments: up to Rs. 1,24,541/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

2) Database Engineer/Software Developer - 2 Vacancies
Essential Qualification & Experience: MTech (in any branch of engineering/science) with three (03) years of experience, or BTech/MSc in any branch of engineering/science or MCA with five (05) years of experience in database/software design/development and programming skills relating to modern database technologies, HPC and parallel computing, cloud computing etc. in the IT industry or an academic setup.
Job Description:
- Database/portal development and installation/testing for new databases/applications developed at IBDC.
- Very good web development experience in Java, PHP and Python.
- Database systems: Postgres, MongoDB; HTML, CSS, JavaScript, NodeJS.
- Mirroring/porting of leading biological data repositories at IBDC.
- Full-stack developer covering the above-mentioned skills.
Monthly Consolidated Emoluments: up to Rs. 1,07,593/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

3) Data Curator - 3 Vacancies
Essential Qualification & Experience: MTech in CS/IT/E&C/Bioinformatics/Computational Biology/Data Science with four (04) years of experience, or MCA/BTech in CS/IT/E&C/Bioinformatics/Computational Biology/Data Science, or MSc (Biological Sciences) with six (06) years of experience in the area of bioinformatics/computational biology/data science, genomics/proteomics/metabolomics, structural biology, biodiversity etc.
Job Description:
- Experience of programming in R, Python, SQL.
- Bioinformatic data analysis (NGS, proteomics, metabolomics etc.).
- Natural Language Processing (NLP).
- Curation and mining of different types of biological data under the supervision of domain experts.
Monthly Consolidated Emoluments: up to Rs. 89,875/- per month based on the relevant experience of the candidate. Maximum Age Limit: 40 years.

4) System Administrator - 2 Vacancies
Essential Qualification & Experience: MTech (CS/IT/SE/E&C) with two (02) years of experience, or BTech/MSc in CS/IT/SE/CA/DS/E&C/E&T or MCA with three (03) years of experience in similar infrastructure. Good experience in a Linux environment; experience and knowledge of parallel/distributed computing environments, workload management software in HPC clusters and SMP machines, the latest HPC cluster management tools, and tools for measuring the performance of parallel applications in an HPC setup, along with very good knowledge of state-of-the-art CPU/GPU servers for HPC environments, would be preferable. Desirable: RHCE.
Job Description:
- Install and configure software and hardware; upgrade systems with new releases.
- Manage network servers and technology tools; monitor performance and maintain systems according to requirements; set up accounts and workstations.
- Troubleshoot issues and outages; ensure security through access controls, backups and firewalls.
- Develop expertise to train staff on new technologies.
- Build an internal wiki with technical documentation, manuals and IT policies.
Monthly Consolidated Emoluments: up to Rs. 89,875/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

5) Storage Administrator - 2 Vacancies
Essential Qualification & Experience: MTech (CS/IT/SE/E&C) with two (02) years of experience, or BTech/MSc in CS/IT/SE/CA/DS/E&C/E&T or MCA with three (03) years of experience in managing and maintaining SAN/NAS and backup/recovery in data centers. Proficient in understanding storage products and advanced troubleshooting; able to implement high availability, workload management and failover, and disaster recovery. Experience in executing upgrades, capacity increases, and reallocation of storage resources. Ability to work proficiently within server operating systems (Windows/Linux/Solaris/AIX), including file systems and volume management software, and to write scripts and automate tasks. Knowledge of Fibre Channel configurations in storage hardware (HBAs) and Fibre Channel switches, including multipath configuration and SAN fabric management. Desirable: storage certifications from leading manufacturers.
Job Description:
- Analyze and resolve called-out problems; ensure the integrity of the supported environment.
- Actively and pre-emptively identify possible faults and causes through the execution of pre-determined health checks.
- Handle and resolve problems identified by system admins or monitoring software.
- Work with change and problem management tools.
- Apply experience with capacity management and planning and performance monitoring.
- Hardware/software fault detection and vendor liaison.
- Analyze and resolve raised problems within target.
- Investigate, identify and document standard operating procedures that will improve application recoverability.
- Responsible for backup & recovery, implementation of optimum backup plans and overall backup management.
Monthly Consolidated Emoluments: up to Rs. 89,916/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

6) Senior Programmer - 2 Vacancies
Essential Qualification & Experience: MTech (CS/IT/SE/E&C) with three (03) years of experience, or BTech/MSc in CS/IT/SE/CA/DS/E&C/E&T or MCA with five (05) years of experience in programming, in addition to proficiency in a widely used programming language like C++, Java, Perl, Python, JSP, SQL etc. Experience in Data as a Service and Continuous Analytics as a Service, leveraging Artificial Intelligence and advanced analytics.
Job Description:
- Tech lead in deep learning, porting cutting-edge sentiment analysis/deep learning/machine learning/other models onto multi-GPU servers.
- Conversion of existing CPU-based models to GPU-accelerated models.
- Research and development on GPU-based databases like MapD; optimization of existing computer vision applications using GPU profilers.
- Very good web development experience in Java, PHP and Python.
- Database systems: Postgres, MongoDB etc.
Monthly Consolidated Emoluments: up to Rs. 74,833/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

7) Technical Assistant - 2 Vacancies
Essential Qualification & Experience: MTech (CS/IT/SE/E&C) or BTech/MSc in CS/IT/SE/CA/DS/E&C/E&T or MCA with two (02) years of experience in hardware/network management.
Job Description:
- Data entry operations.
- Providing support in the usage/management of IT infrastructure.
Monthly Consolidated Emoluments: up to Rs. 41,041/- per month based on the relevant experience of the candidate. Maximum Age Limit: 45 years.

About the Centre
Regional Centre for Biotechnology (RCB) is an institution of national importance and a statutory body established by the Department of Biotechnology, Govt. of India, with regional and global partnerships synergizing with the programmes of UNESCO as a Category II Centre. The primary focus of RCB is to provide world-class education and training and to conduct innovative research at the interface of multiple disciplines, creating high-quality human resources in disciplinary and interdisciplinary areas of biotechnology in a globally competitive research milieu. RCB, with support from the Department of Biotechnology, Govt. of India, has established the Indian Biological Data Centre (IBDC) for storage and distribution of biological data generated across the nation. The Indian Biological Data Centre (IBDC) is the first national repository for life science data in India and is mandated to archive all life science data generated from publicly funded research in India. The data centre is supported by the Government of India (GOI) through the Department of Biotechnology (DBT). It is established at the Regional Centre for Biotechnology (RCB), Faridabad, in the national capital region, in close collaboration with the National Institute of Immunology, the International Centre for Genetic Engineering & Biotechnology and the National Informatics Centre (NIC), India. The Executive Director, RCB also serves as the Lead Coordinator of the IBDC project.
Centre Address: 3rd Milestone, Faridabad-Gurgaon Expressway, Faridabad, Haryana, 121001, India
Posted 1 month ago
4.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Work from Office
Part of the Oncology Clinical Team, working on building and developing oncology data dictionaries and a liquid tumor pipeline from scratch, and on SOP creation by acting as an SME for the liquid tumor portfolio, along with secondary duties of curation/abstraction etc.

Position Summary: We are seeking a highly skilled and experienced Data Curation Subject Matter Expert (SME) in Liquid Cancers to join our Oncology Clinical Team. This role will focus on the review of clinical data specifically related to hematologic malignancies (e.g., leukemia, lymphoma, myeloma), and contribution to the development of data dictionaries, SOPs, and AI training pipelines to support oncology trials. Abstraction and curation may also be required based on project requirements.

Essential Job Functions:
- Serve as the oncology data curation SME for various liquid tumor types.
- Develop and maintain liquid tumor data libraries; coordinate with cross-functional teams to meet project-specific requirements from a data curation/RWE perspective.
- Contribute proactively to pipeline optimization for liquid tumors.
- Apply critical thinking to identify, troubleshoot, and escalate issues during data pipeline operations.
- Proactively contribute to streamlining processes specific to the liquid tumor pipeline.
- Show ability and flexibility to support other projects when there is a team requirement.
- Conduct additional duties as assigned, including review of selected medical records to assess eligibility for new projects and performing second-level reviews/QC reviews of medical records.
- Screen cases for relevant clinical trials across various projects and/or disease subtypes. Knowledge of cancer trials, including understanding their inclusion and exclusion criteria, is essential and required.
- Respond promptly to queries issued by the Lead Data Abstractor, Operations Manager or other project personnel.
- Access electronic data systems for review of medical records and enter specific data into congruent electronic data systems.
- Understand inclusion and exclusion criteria, adverse events, hospitalization data, and medication details, and categorize the data accordingly.
- Review project-specific documents, as needed, to develop familiarity with project goals and with the Abstractor tasks in each project.
- Understand diagnostic, pathology and other reports and obtain the exact information required according to trial-specific SOPs.
- Use other resources as needed to gain the knowledge required to perform Abstractor work on new projects.
- Share medical knowledge and project-specific procedural knowledge with other Data Abstractors as needed.

Qualification:
- Minimum 4-5 years' experience across data abstraction/curation, with a strong focus on liquid tumors, is essential; previous experience in a data curation SME role is desirable.
- In-depth knowledge across the liquid tumor domain, with a solid understanding of foundational liquid tumor concepts, is also essential.
- Oncology experience is preferable; familiarity with how cancer is treated from diagnosis through treatment and recovery, and an understanding of cancer terminology, is essential.
- Experience in clinical research or related fields, especially oncology trials, is preferred.
- An advanced level of clinical knowledge associated with chronic disease states is required.
- Experience in clinical research or in reviewing medical data for clinical trials.
- Certification in clinical research or clinical data management.

Education: BSc (Nursing), BDS, BAMS, BHMS, MDS, BPharm, MPharm

Skills and Abilities:
- Flexibility: Flexibility in coping with changing work assignments, changing project requirements, varying training meeting schedules, and database resources that may not always function optimally.
- Innovation and Proactiveness: Should be able to foresee probable functionalities and/or think outside the box, as complex problems may sometimes require innovative solutions.
- Language: Strong communication skills, both written and verbal, to work with multiple internal and external clients in a fast-paced environment.
- Reasoning: Ability to make independent judgments in abstracting medical data and knowledge of when to seek input from other staff.
- Computer: Ability to create and maintain documents using Microsoft Office (Word, Excel, Outlook and PowerPoint).

Why Join Us?
- We are revolutionizing a unique industry that has the potential to impact and benefit patients from all over the world - you can create impact at scale.
- We have had company-sponsored workations in Bali, Sri Lanka, and Manali and take pride in our hard-working yet super fun culture.
- We are working on some of the most challenging problems in a highly regulated industry, which gives you the opportunity to solve genuinely interesting problems.
- You will get the chance to work with experts from multiple industries, with best-in-industry compensation.
Posted 2 months ago
0.0 - 3.0 years
3 - 4 Lacs
Pune
Work from Office
Role & responsibilities:
- Extract scientific and technical information from source documents and enter it in numeric form as per strict input guidelines & manuals.
- Decode generically claimed compounds as per guidelines, policies and manuals.
- Review data for deficiencies or errors, check correctness, resolve any incompatibilities where possible, and check the final output.
- Verify the accuracy of the data entered.
- Store completed work in designated locations and perform any further data processing as needed.
- Handle work within specified time limits.
Posted 2 months ago
1.0 - 6.0 years
0 - 0 Lacs
Chennai
Remote
Agent Trainer (Remote)
About Our Clients
Our clients are leading-edge research labs and applied AI teams developing autonomous intelligence. Their teams include researchers, engineers, and operators with deep expertise in machine learning, reinforcement learning, and large-scale data operations. Together, they are building real-time computer autopilots powered by foundation models trained on large-scale video-language-action data.
Role
Agent Trainers are responsible for training AI Agents and enabling their computer-use capabilities. In this role, you will directly influence how these Agents interact with computers and software, paving the way for a future where computers complete tasks on our behalf.
You may be a fit if you:
- Are an expert computer user across multiple platforms and applications
- Have a wealth of ideas about what a computer-use agent should be able to do
- Are passionate about the future of artificial intelligence
- Have expert-level proficiency in at least one professional software tool
Applying: To apply, please visit https://forms.gle/rjUBYARYk2zok3BH9 and fill in the form.
Posted 2 months ago
1.0 - 6.0 years
0 - 1 Lacs
Kolkata
Remote
Agent Trainer (Remote)
About Our Clients
Our clients are leading-edge research labs and applied AI teams developing autonomous intelligence. Their teams include researchers, engineers, and operators with deep expertise in machine learning, reinforcement learning, and large-scale data operations. Together, they are building real-time computer autopilots powered by foundation models trained on large-scale video-language-action data.
Role
Agent Trainers are responsible for training AI Agents and enabling their computer-use capabilities. In this role, you will directly influence how these Agents interact with computers and software, paving the way for a future where computers complete tasks on our behalf.
You may be a fit if you:
- Are an expert computer user across multiple platforms and applications
- Have a wealth of ideas about what a computer-use agent should be able to do
- Are passionate about the future of artificial intelligence
- Have expert-level proficiency in at least one professional software tool
Applying: To apply, please visit https://forms.gle/rjUBYARYk2zok3BH9 and fill in the form.
Posted 2 months ago