0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 299670

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Apply understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Build data ingest and data transformation platforms
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse model errors and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education:
- Bachelor's degree in Computer Science or similar; Master's preferred

Skills:
- Hands-on programming experience in Python, working on enterprise products
- Demonstrated proficiency in multiple programming languages, with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience building AI models using classification and clustering techniques
- Expertise in visualizing and manipulating big datasets
- Strong in MS SQL
- Acumen to take a complex problem, break it down into workable pieces, and code a solution
- Excellent verbal and written communication skills
- Ability to work in, and help define, a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
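The "train models and tune their hyperparameters" duty above can be sketched minimally. This is an illustrative, hypothetical example only: a grid search over k for a toy nearest-neighbour classifier on an invented 1-D dataset, scored on a held-out validation split; a production pipeline would use a real framework and dataset.

```python
# Hypothetical hyperparameter-tuning sketch: pick the best k for a toy 1-D
# nearest-neighbour classifier using a held-out validation split.
# Dataset, model, and grid are all invented for illustration.

def knn_predict(train, k, x):
    """Predict the majority label among the k nearest training points to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

def validation_accuracy(train, val, k):
    """Fraction of validation points classified correctly with this k."""
    hits = sum(knn_predict(train, k, x) == y for x, y in val)
    return hits / len(val)

# Toy 1-D dataset: small features are class 0, large features are class 1.
train = [(1, 0), (2, 0), (3, 0), (4.5, 1), (6, 1), (7, 1), (8, 1)]
val = [(1.5, 0), (2.5, 0), (6.5, 1), (7.5, 1)]

# Grid search: keep the k with the best validation accuracy.
best_k = max([1, 3, 5], key=lambda k: validation_accuracy(train, val, k))
print(best_k, validation_accuracy(train, val, best_k))
```

The same loop generalizes to any hyperparameter grid: score each candidate on data the model never trained on, then keep the best scorer.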
Posted 6 hours ago
2.0 years
0 Lacs
Delhi, India
On-site
This job is based in Australia.

The Opportunity

We are welcoming applications for a Postdoctoral Research Associate as part of a recently awarded ARC Discovery Project between UNSW Sydney and the University of Sydney. The goal of the project is to determine how hydrogen affects deformation at different microstructural features in alloys, to aid future alloy design. The UNSW team, led by A/Prof Patrick Burr, will provide the modelling contributions, and the team at USyd, led by Prof Julie Carney and Dr Ranming Liu, is in charge of the experimental part. The two teams will collaborate closely, and the candidate is expected to integrate their modelling work with the experimental tasks. For more information on the research group of A/Prof Patrick Burr, please visit https://www.patrickburr.com/group

In this role you will be responsible for performing ab-initio and molecular dynamics simulations, and for developing inter-atomic potentials using a combination of classical and machine-learning (ML) approaches, including a new hybrid method recently developed in our group. The types of simulations to be performed may include: accurate modelling of hydrogen trapping at point defects, dislocations, grain boundaries and second phases of model alloys; creation of a high-fidelity 1:1 model of an experimentally observed atom probe dataset of ~1 million atoms; quantification of quantum tunnelling in hydrogen mobility; and simulation of strain-driven redistribution of hydrogen within microstructural features. This position is best suited to candidates with a strong background in computational materials science. The role reports to Associate Professor Patrick Burr and has no direct reports.

- Salary (Level A): AUD $110,059 to $117,718 per annum + 17% superannuation
- Full-time, fixed-term contract (2 years)
- Location: Kensington, Sydney, Australia

About Us

UNSW isn't like other places you've worked.
Yes, we're a large organisation with a diverse and talented community; a community doing extraordinary things. But what makes us different isn't only what we do, it's how we do it. Together, we are driven to be thoughtful, practical, and purposeful in all we do. If you want a career where you can thrive, be challenged, and do meaningful work, you're in the right place.

The School of Mechanical and Manufacturing Engineering is internationally recognised for its excellence in research and teaching. Our mission is to nurture students to become industry leaders who will generate societal, economic, and environmental benefits. The School is one of the largest and most prestigious schools in Australia for its thriving research programs and its contribution to education excellence in Aerospace, Mechanical Engineering, Advanced Manufacturing Engineering, Robotics and Mechatronic Engineering. Our School's QS ranking for 2023 is #49 globally, the highest in Australia. The ARWU (Shanghai) Rankings for 2023 ranked Mechanical Engineering at #36 globally, the highest in Australia; Aerospace Engineering at UNSW was ranked #45 globally. For further information on our school, go to https://www.unsw.edu.au/engineering/our-schools/mechanical-and-manufacturing-engineering

The UNSW Nuclear Innovation Centre is a pioneering hub dedicated to advancing Australia's nuclear science industry. Launched in February 2024, the Centre fosters cross-disciplinary and cross-industry collaborations, focusing on areas such as medicine, irradiated materials, waste management, space exploration, and mining. By bringing together experts from various fields, the Centre aims to drive innovation, develop a skilled workforce, and nurture future leaders. Its mission is to enhance research, education, and training, ensuring the prosperity and competitiveness of Australia's nuclear technology sector.
For more information please visit https://www.unsw.edu.au/research/nuclear-innovation-centre

Skills & Experience
- A PhD in a related discipline, and/or relevant work experience
- Strong coding skills in commonly used scientific languages (e.g. Python, MATLAB, shell script, C)
- Demonstrated experience in performing simulations at the atomic scale, including density functional theory (e.g. VASP, ABINIT, Quantum ESPRESSO) and molecular dynamics (e.g. LAMMPS, DL_POLY)
- Knowledge of the development of inter-atomic potentials, classical or ML
- Proven commitment to proactively keeping up to date with discipline knowledge and developments
- Demonstrated ability to undertake high-quality academic research and conduct independent research with limited supervision
- Demonstrated track record of publications and conference presentations, relative to opportunity
- Evidence of supervision or mentoring of students is desirable
- Evidence of highly developed interpersonal skills, ability to work in a team, collaborate across disciplines and build effective relationships
- An understanding of and commitment to UNSW's aims, objectives and values in action, together with relevant policies and guidelines
- Knowledge of health and safety responsibilities and commitment to attending relevant health and safety training

Additional details about the specific responsibilities for this position can be found in the position description, available via JOBS@UNSW.

To Apply: Please click the apply now button and submit your CV, cover letter and responses to the Skills and Experience. You should systematically address the Skills and Experience listed in the position description in your application. Please note applications will not be accepted if sent to the contact listed below.
Contact: Eugene Aves – Talent Acquisition Consultant, E: eugene.aves@unsw.edu.au

Applications close: 11:55 pm (Sydney time) on Monday 11 August 2025.

UNSW is committed to evolving a culture that embraces equity and supports a diverse and inclusive community where everyone can participate fairly, in a safe and respectful environment. We welcome candidates from all backgrounds and encourage applications from people of diverse gender, sexual orientation, cultural and linguistic backgrounds, Aboriginal and Torres Strait Islander background, people with disability, and those with caring and family responsibilities. UNSW provides workplace adjustments for people with disability, and access to flexible work options for eligible staff. The University reserves the right not to proceed with any appointment.
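The classical inter-atomic potentials mentioned in the role can be illustrated with the simplest possible case. This is a hedged sketch only, in arbitrary reduced units: a pairwise Lennard-Jones 12-6 energy sum with no cutoff or periodic boundaries, nothing like the DFT and ML potentials the project actually uses.

```python
# Illustrative sketch of a classical inter-atomic potential: a pairwise
# Lennard-Jones 12-6 energy summed over all unique atom pairs.
# No cutoff, no periodic boundaries; epsilon/sigma are arbitrary reduced units.
import itertools
import math

def lj_pair_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones 12-6 pair energy: 4*eps*((sigma/r)^12 - (sigma/r)^6)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def total_energy(positions, epsilon=1.0, sigma=1.0):
    """Sum the pair energy over every unique pair of atomic positions."""
    energy = 0.0
    for a, b in itertools.combinations(positions, 2):
        energy += lj_pair_energy(math.dist(a, b), epsilon, sigma)
    return energy

# The pair energy is minimised at r = 2^(1/6)*sigma, where it equals -epsilon.
r_min = 2 ** (1 / 6)
print(lj_pair_energy(r_min))  # approximately -1.0 (i.e. -epsilon)
print(total_energy([(0.0, 0.0, 0.0), (r_min, 0.0, 0.0)]))
```

ML potentials replace the fixed analytic form above with a learned function of the local atomic environment, but the energy-summation structure is the same.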
Posted 6 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
What makes this role special

Join a green-field enterprise solutions project that spans cloud infra, data pipelines, QA automation, BI dashboards and business process analysis. Spend your first year rotating through four pods, discovering where you shine, then lock into the stream you love (DevOps, Data Engineering, QA, BI, or Business Analysis). Work side-by-side with senior architects and PMs; demo every Friday; leave with production-grade experience most freshers wait years to gain.

Rotation roadmap (three months each)
- DevOps Starter – write Terraform variables, tweak Helm values, add a GitHub Action that auto-lints PRs.
- Data Wrangler – build a NiFi flow (CSV → S3 Parquet), add an Airflow DAG, validate schemas with Great Expectations.
- QA Automation – write PyTest cases for the WhatsApp bot, create a k6 load script, plug Allure reports into CI.
- BI / Business Analysis – design a Superset dataset & dashboard, document KPIs, shadow the PM to craft a user story and UAT sheet.

Day-to-day you will
- Pick tickets from your pod's board and push clean pull requests or dashboard changes.
- Pair with mentors, record lessons in the wiki, and improve runbooks as you go.
- Demo your work (max 15 min) in our hybrid Friday huddle.

Must-have spark
- Basic coding in Python or JavaScript and Git fundamentals (clone → branch → PR).
- Comfortable with SQL JOINs & GROUP BY and spreadsheets for quick analysis.
- Curious mindset, clear written English, happy to ask "why?" and own deadlines.

Bonus points
- A hobby Docker or AWS free-tier project.
- A Telegram/WhatsApp bot or hackathon win you can show.
- Contributions to open source or a college IoT demo.

What success looks like
- Ship at least twelve merged PRs/dashboards in your first quarter.
- Automate one manual chore the seniors used to dread.
- By month twelve, independently take a user story from definition → code, or spec → test → demo.

Growth path

Junior ➜ Associate II ➜ Senior (lead a pod); pay and AWS certifications climb with you.
How to apply

Fork github.com/company/erpnext-starter, fix any "good-first-issue", and open a PR. Email your resume, the PR link, and a 150-word story about the coolest thing you've built. Short-listed candidates get a 30-min Zoom chat (no riddles) and a 24-hour mini-task aligned to your preferred first rotation. We hire attitude over pedigree: show you learn fast, document clearly, and love building, and you're in.
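The "SQL JOINs & GROUP BY" must-have can be tried out entirely in Python's standard library. This is a toy sketch: the ticket-board tables and rows below are invented, queried with the stdlib `sqlite3` module.

```python
# Toy JOIN + GROUP BY example with Python's stdlib sqlite3.
# Table names and rows are invented: a ticket board grouped by pod.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pods (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tickets (id INTEGER PRIMARY KEY, pod_id INTEGER, status TEXT);
    INSERT INTO pods VALUES (1, 'DevOps'), (2, 'Data');
    INSERT INTO tickets VALUES
        (1, 1, 'done'), (2, 1, 'open'), (3, 2, 'done'), (4, 2, 'done');
""")

# JOIN each ticket to its pod, then GROUP BY pod to count finished tickets.
rows = conn.execute("""
    SELECT p.name, COUNT(*) AS done_count
    FROM tickets t
    JOIN pods p ON p.id = t.pod_id
    WHERE t.status = 'done'
    GROUP BY p.name
    ORDER BY p.name
""").fetchall()
print(rows)  # → [('Data', 2), ('DevOps', 1)]
```

The JOIN matches rows across the two tables on `pod_id`; GROUP BY then collapses the matched rows into one count per pod.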
Posted 12 hours ago
3.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Data Engineer (3-4 years' experience) - Real-time & Batch Processing | AWS, Kafka, ClickHouse, Python

Location: Noida. Experience: 3-4 years. Job type: Full-time.

About The Role

We are looking for a skilled Data Engineer with 3-4 years of experience to design, build, and maintain real-time and batch data pipelines for handling large-scale datasets. You will work with AWS, Kafka, Cloudflare Workers, Python, ClickHouse, Redis, and other modern technologies to enable seamless data ingestion, transformation, merging, and storage. Bonus: if you have web data analytics or programmatic advertising knowledge, it will be a big plus!

Responsibilities

Real-Time Data Processing & Transformation
- Build low-latency, high-throughput real-time pipelines using Kafka, Redis, Firehose, Lambda, and Cloudflare Workers.
- Perform real-time data transformations such as filtering, aggregation, enrichment, and deduplication using Kafka Streams, Redis Streams, or AWS Lambda.
- Merge data from multiple real-time sources into a single structured dataset for analytics.

Batch Data Processing & Transformation
- Develop batch ETL/ELT pipelines for processing large-scale structured and unstructured data.
- Perform data transformations, joins, and merging across different sources in ClickHouse, AWS Glue, or Python.
- Optimize data ingestion, transformation, and storage workflows for efficiency and reliability.

Data Pipeline Development & Optimization
- Design, develop, and maintain scalable, fault-tolerant data pipelines for real-time and batch processing.
- Optimize data workflows to reduce latency, cost, and compute load.

Data Integration & Merging
- Combine real-time and batch data streams for unified analytics.
- Integrate data from various sources (APIs, databases, event streams, cloud storage).

Cloud Infrastructure & Storage
- Work with AWS services (S3, EC2, ECS, Lambda, Firehose, RDS, Redshift, ClickHouse) for scalable data processing.
- Implement data lake and warehouse solutions using S3, Redshift, and ClickHouse.

Data Visualization & Reporting
- Work with Power BI, Tableau, or Grafana to create real-time dashboards and analytical reports.

Web Data Analytics & Programmatic Advertising (big plus!)
- Experience working with web tracking data, user behavior analytics, and digital marketing datasets.
- Knowledge of programmatic advertising, ad impressions, clickstream data, and real-time bidding (RTB) analytics.

Monitoring & Performance Optimization
- Implement monitoring and logging of data pipelines using AWS CloudWatch, Prometheus, and Grafana.
- Tune Kafka, ClickHouse, and Redis for high performance.

Collaboration & Best Practices
- Work closely with data analysts, software engineers, and DevOps teams to enhance data accessibility.
- Follow best practices for data governance, security, and compliance.

Must-Have Skills
- Programming: strong experience in Python and JavaScript.
- Real-time data processing & merging: expertise in Kafka, Redis, Cloudflare Workers, Firehose, Lambda.
- Batch processing & transformation: experience with ClickHouse, Python, AWS Glue, SQL-based transformations.
- Data storage & integration: experience with MySQL, ClickHouse, Redshift, and S3-based storage.
- Cloud technologies: hands-on with AWS (S3, EC2, ECS, RDS, Firehose, ClickHouse, Lambda, Redshift).
- Visualization & reporting: knowledge of Power BI, Tableau, or Grafana.
- CI/CD & Infrastructure as Code (IaC): familiarity with Terraform, CloudFormation, Git, Docker, and Kubernetes.

(ref:hirist.tech)
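The deduplication-and-aggregation responsibility above can be sketched in plain Python. This is a hedged stand-in only: a set plays the part of a Redis dedup set, a list plays the Kafka consumer loop, and a dict plays the aggregated sink; the event shape and field names are invented.

```python
# Sketch of real-time dedup + aggregation with plain-Python stand-ins:
# a set for the Redis dedup store, a list for the Kafka consumer loop,
# a dict for the aggregated sink. Event fields are invented.

def process_stream(events):
    seen_ids = set()      # stands in for a Redis SET used for deduplication
    totals = {}           # stands in for an aggregated sink (e.g. ClickHouse)
    for event in events:  # stands in for a Kafka consumer loop
        if event["id"] in seen_ids:
            continue      # duplicate delivery: drop it
        seen_ids.add(event["id"])
        key = event["source"]
        totals[key] = totals.get(key, 0) + event["value"]
    return totals

events = [
    {"id": "a1", "source": "web", "value": 10},
    {"id": "a2", "source": "app", "value": 5},
    {"id": "a1", "source": "web", "value": 10},  # redelivered duplicate
    {"id": "a3", "source": "web", "value": 7},
]
print(process_stream(events))  # → {'web': 17, 'app': 5}
```

A real pipeline does the same thing with durable state: the dedup set must survive restarts (hence Redis) and the aggregation must be windowed, but the per-event logic is this shape.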
Posted 13 hours ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Purpose of Role

Chubb is seeking a highly skilled and experienced Deep Learning Engineer with Generative AI experience to develop and scale our Generative AI capabilities. The ideal candidate will be responsible for designing, fine-tuning and training large language models and developing Generative AI systems that can create and improve the conversational abilities and decision-making skills of our machines.

Key Accountabilities & Responsibilities
- Develop and improve Generative AI systems to enable high-quality decision making, refine answers to queries, and enhance automated communication capabilities.
- Own the entire process of data collection, training, and deploying machine learning models.
- Continuously research and implement cutting-edge techniques in deep learning, NLP and Generative AI to build state-of-the-art models.
- Work closely with Data Scientists and other Machine Learning Engineers to design and implement end-to-end solutions.
- Optimize and streamline deep learning training pipelines.
- Develop performance metrics to track the efficiency and accuracy of deep learning models.

Skills & Experience
- Minimum of 4 years of industry experience in developing deep learning models with a focus on NLP and Generative AI.
- Expertise in deep learning frameworks such as TensorFlow, PyTorch and Keras.
- Experience working with cloud-based services such as Azure for training and deployment of deep learning models.
- Experience with Hugging Face's Transformers library.
- Expertise in developing and scaling Generative AI systems.
- Experience in large-dataset processing, including pre-processing, cleaning and normalization.
- Proficiency in programming languages such as Python (preferred) or R.
- Experience with natural language processing (NLP) techniques and libraries.
- Excellent analytical and problem-solving skills.
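The "develop performance metrics" responsibility can be illustrated with two standard language-model measurements. This is a plain-Python sketch over made-up model outputs, not Chubb's actual metrics: token-level accuracy, and average cross-entropy (whose exponential is perplexity).

```python
# Sketch of two common LM evaluation metrics in plain Python:
# token-level accuracy and average cross-entropy (exp of which is perplexity).
# The predicted/reference tokens and probabilities below are invented.
import math

def token_accuracy(predicted, reference):
    """Fraction of positions where the predicted token matches the reference."""
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)

def avg_cross_entropy(probs_of_truth):
    """Mean negative log-probability the model assigned to the true tokens."""
    return -sum(math.log(p) for p in probs_of_truth) / len(probs_of_truth)

predicted = ["the", "cat", "sat", "down"]
reference = ["the", "cat", "sat", "here"]
print(token_accuracy(predicted, reference))          # → 0.75

# Perplexity of a 3-token sample where the model gave the true tokens
# probabilities 0.9, 0.8 and 0.5; lower is better.
print(math.exp(avg_cross_entropy([0.9, 0.8, 0.5])))
```

In a real pipeline these come from framework metric utilities over batched tensors, but tracking exactly these quantities per release is how regressions in model quality are caught.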
Posted 23 hours ago
5.0 years
0 Lacs
India
On-site
This posting is for one of our international clients.

About the Role

We're creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities

As the SME, you'll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML and Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.
Essential Tools & Platforms

A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., chain-of-thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
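The RAG retrieval step named in the requirements reduces to nearest-neighbour search over embeddings. This is a minimal hedged sketch: hard-coded toy vectors stand in for a vector database such as Pinecone or ChromaDB, and the documents and vectors are invented; a real system would embed text with a model (for example a Gemini embedding endpoint) rather than hard-code them.

```python
# Minimal RAG retrieval sketch: cosine similarity over toy embedding vectors
# standing in for a vector database. Documents and vectors are invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tiny "index": document text paired with a pretend embedding.
index = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.1]),
    ("api rate limits", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the top-k documents ranked by cosine similarity to the query."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedding near the "api rate limits" vector retrieves that document,
# which would then be stuffed into the generation prompt as context.
print(retrieve([0.0, 0.1, 1.0]))  # → ['api rate limits']
```

Production systems replace the linear scan with an approximate-nearest-neighbour index, but retrieval-then-generation is exactly this pipeline.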
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Dear Candidate,

Greetings from LTIMindtree! Your profile has been shortlisted for the technical round of interviews. I hope you are having a great day.

Skills: Data Analyst
Location: Hyderabad, Pune, Mumbai, Kolkata, Bangalore, Chennai
Notice period: Immediate to 15 days

Please find the JD below for your reference.
- 5 to 8 years of experience in information technology and business analysis
- Data coverage analysis and identification of data gaps
- Understanding of product and channel hierarchies, data transformation, and aggregations
- Strong functional and technical knowledge of the retail industry (sales online/offline, CRM)
- Good understanding of ETL, SQL Server, and BI tools
- An ability to align and influence stakeholders and build working relationships
- A confident and articulate communicator capable of inspiring strong collaboration
- Good knowledge of IT systems and processes
- Strong analytical, problem-solving, and project management skills
- Attention to detail and complex processes
- Business engagement and stakeholder management
- Partner with the business team to identify needs and analytics opportunities
- Supervise and guide vendor partners to develop and maintain a data warehouse platform and BI reporting
- Work with management to prioritize business and information needs
- Mine data from sources, then reorganize the data into the target format
- Perform data analyses between the L'Oréal database and business requirements
- Interpret data, analyze results using statistical techniques, and provide ongoing reports
- Find out the mapping and gaps
- Provide transformation logic
- Research and verify the logic and relationships between datasets and KPIs
- Filter and clean data by reviewing reports and performance indicators to locate and correct code problems

If interested, kindly share your updated resume and fill in the link below: https://forms.office.com/r/EdFKPCNVaA

We shall get back to you soon regarding the further steps.
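Several of the duties above ("find out the mapping and gaps", data coverage analysis) often reduce to set comparisons on business keys between a source extract and a target table. A minimal stdlib-Python sketch; the record layouts and the (sku, channel) key are illustrative assumptions, not from the posting:

```python
# Coverage / gap analysis between a source extract and a target warehouse table.
# The records and the (sku, channel) business key are illustrative assumptions.
source = [
    {"sku": "A1", "channel": "online",  "sales": 120},
    {"sku": "A2", "channel": "offline", "sales": 80},
    {"sku": "A3", "channel": "online",  "sales": 45},
]
target = [
    {"sku": "A1", "channel": "online"},
    {"sku": "A3", "channel": "online"},
]

def keys(rows):
    """Business key = (sku, channel); this drives the source-to-target mapping."""
    return {(r["sku"], r["channel"]) for r in rows}

missing_in_target = keys(source) - keys(target)     # data gaps to report
unexpected_in_target = keys(target) - keys(source)  # rows with no known source
coverage = 1 - len(missing_in_target) / len(keys(source))

print(sorted(missing_in_target))    # [('A2', 'offline')]
print(f"coverage: {coverage:.0%}")  # coverage: 67%
```

In practice the same comparison is usually expressed as an outer join (SQL or pandas) so the mismatched rows themselves, not just their keys, can be handed back as transformation-logic input.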
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 299670

NTT DATA strives to hire exceptional, innovative, and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Hands-on experience in Python
- Build data ingest and data transformation platforms
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse the errors of the model and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education: Bachelor’s in Computer Science or similar; Master’s preferred.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience in building AI models using classification and clustering algorithms
- Expertise in visualizing and manipulating big datasets
- Strong in MS SQL
- Acumen to take a complex problem, break it down into workable pieces, and code a solution
- Excellent verbal and written communication skills
- Ability to work in and help define a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
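Two of the duties in this posting, defining validation strategies and tuning hyperparameters, can be sketched with a stdlib-only k-fold cross-validation grid search. The single-threshold "model" and synthetic data below are illustrative assumptions, not part of the role:

```python
import itertools
import random

def k_fold_indices(n, k, seed=0):
    """Shuffle indices 0..n-1 and split them into k validation folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    size = n // k
    return [idx[i * size:(i + 1) * size] for i in range(k)]

def accuracy(threshold, rows):
    """Toy 'model': predict positive when x > threshold; return accuracy."""
    correct = sum(1 for x, y in rows if (x > threshold) == y)
    return correct / len(rows)

# Synthetic labelled data: the true rule is "positive when x > 0.6".
rng = random.Random(1)
data = [(x, x > 0.6) for x in (rng.random() for _ in range(200))]

# Grid search over one hyperparameter, scored by k-fold cross-validation.
grid = {"threshold": [0.2, 0.4, 0.6, 0.8]}
folds = k_fold_indices(len(data), k=5)

best_score, best_params = -1.0, None
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    scores = [accuracy(params["threshold"], [data[i] for i in fold])
              for fold in folds]               # score on each held-out fold
    mean_score = sum(scores) / len(scores)
    if mean_score > best_score:
        best_score, best_params = mean_score, params

print(best_params)  # the threshold matching the true boundary (0.6) wins
```

In a real pipeline the same shape is usually provided by scikit-learn's `GridSearchCV`; the point here is that a validation strategy is just a disciplined split plus a scoring rule.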
Posted 1 day ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication
- Hands-on experience in Python
- Build data ingest and data transformation platforms
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their probability of success
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse the errors of the model and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education: Bachelor’s in Computer Science or similar.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of deep learning, machine learning, and artificial intelligence
- Experience in building AI models using classification and clustering algorithms
- Expertise in visualizing and manipulating big datasets
- Strong in MS SQL
- Acumen to take a complex problem, break it down into workable pieces, and code a solution
- Excellent verbal and written communication skills
- Ability to work in and help define a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes such as claims filing and adjudication is a plus
Posted 1 day ago
1.0 - 5.0 years
0 - 0 Lacs
Shāhdara
On-site
Job Title: Artificial Intelligence Engineer
Company: Humanoid Maker
Location: Delhi
Type: Full-Time
Experience: 1–5 years
Industry: Artificial Intelligence / Software Development / Robotics

About Us: Humanoid Maker is a fast-growing innovator in AI, robotics, and automation solutions. We specialize in AI-powered software, robotic kits, refurbished IT hardware, and technology services that empower startups, businesses, and institutions across India. Our mission is to build intelligent systems that simplify lives and boost productivity.

Role Overview: We are looking for a skilled and creative Artificial Intelligence Engineer to join our AI development team. This role involves building, training, and deploying machine learning models for various use cases including voice synthesis, natural language processing, computer vision, and robotics integration.

Key Responsibilities:
- Develop, train, and fine-tune machine learning and deep learning models for real-world applications.
- Work on projects related to NLP, speech recognition, voice cloning, and robotic intelligence.
- Build APIs and tools to integrate AI features into applications and hardware systems.
- Optimize models for performance and deploy them in edge, desktop, or server environments.
- Collaborate with UI/UX and backend developers for full-stack AI system integration.
- Conduct data preprocessing, feature engineering, and dataset management.
- Continuously research and experiment with the latest AI advancements.

Key Skills & Requirements:
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field.
- 1–5 years of hands-on experience in developing AI/ML models.
- Strong knowledge of Python, PyTorch, TensorFlow, scikit-learn, etc.
- Experience with NLP libraries (e.g., Hugging Face Transformers, spaCy), and/or CV frameworks (OpenCV, YOLO).
- Familiarity with APIs, web frameworks (Flask/FastAPI), and databases (SQL or NoSQL).
- Bonus: Experience in robotics integration or embedded AI (Arduino/Raspberry Pi).

What We Offer:
- Competitive salary based on experience.
- Opportunity to work on cutting-edge AI and robotics projects.
- Friendly and innovative team environment.
- Career growth in a rapidly expanding AI company.
- Access to high-performance computing tools and training resources.

Job Types: Full-time, Permanent
Pay: ₹10,000.00–₹50,000.00 per month
Benefits: Paid sick time
Supplemental Pay: Commission pay, overtime pay, performance bonus, yearly bonus
Education: Bachelor's (Required)
Experience: AI: 1 year (Required)
Language: Hindi (Required)
Work Location: In person
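As a sketch of the data preprocessing and feature engineering duties this role mentions for NLP work, here is a minimal stdlib-Python bag-of-words featurizer. The tokenizer rule and example sentences are illustrative assumptions:

```python
import re
from collections import Counter

def preprocess(text):
    """Lowercase and tokenize on letter runs - a typical first NLP cleaning step."""
    return re.findall(r"[a-z']+", text.lower())

def bag_of_words(docs):
    """Turn documents into count-vector features over a shared, sorted vocabulary."""
    tokenized = [preprocess(d) for d in docs]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)                     # term frequencies for this doc
        vectors.append([counts[tok] for tok in vocab])
    return vocab, vectors

docs = ["The robot speaks Hindi.", "The robot sees and speaks."]
vocab, X = bag_of_words(docs)
print(vocab)  # ['and', 'hindi', 'robot', 'sees', 'speaks', 'the']
print(X)      # [[0, 1, 1, 0, 1, 1], [1, 0, 1, 1, 1, 1]]
```

Libraries like scikit-learn (`CountVectorizer`) or Hugging Face tokenizers replace this in production, but the feature-engineering idea is the same: a deterministic text-to-vector mapping shared across the dataset.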
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About G2 - The Company

When you join G2, you’re joining the team that helps businesses reach their peak potential by powering decisions and strategies with trusted insights from real software users. G2 is the world's largest and most trusted software marketplace. More than 100 million people annually — including employees at all Fortune 500 companies — use G2 to make smarter software decisions based on authentic peer reviews. Thousands of software and services companies of all sizes partner with G2 to build their reputation and grow their business — including Salesforce, HubSpot, Zoom, and Adobe. To learn more about where you go for software, visit www.g2.com and follow us on LinkedIn. As we continue on our growth journey, we are striving to be the most trusted data source in the age of AI for informing software buying decisions and go-to-market strategies. Does that sound exciting to you? Come join us as we try to reach our next PEAK!

About G2 - Our People

At G2, we have big goals, but we stay grounded in our PEAK (Performance + Entrepreneurship + Authenticity + Kindness) values. You’ll be part of a values-driven, growing global community that climbs PEAKs together. We cheer for each other’s successes, learn from our mistakes, and support and lean on one another during challenging times. With ambition and entrepreneurial spirit we push each other to take on challenging work, which will help us all to grow and learn. You will be part of a global, diverse team of smart, dedicated, and kind individuals - each with unique talents, aspirations, and life experiences. At the heart of our community and culture are our people-led ERGs, which celebrate and highlight the diverse identities of our global team. As an organization, we are intentional about our DEI and philanthropic work (like our G2 Gives program) because it encourages us all to be better people.

About The Role

G2 is looking for a Software Engineer to join our growing team!
You will be responsible for helping develop solutions with a strong emphasis on code design and quality. We enjoy quarterly weeks of creativity, where engineers work to solve problems they see our customers have. If you wish to join a talented, passionate team whose kindness and authenticity will help you grow, then apply so we can start our conversation today! This position is based in Bengaluru and requires in-office attendance with a 5-day workweek.

In This Role, You Will:
- Report to an Engineering Manager dedicated to the delivery team
- Develop a high-quality, stable, and well-tested web application
- Apply database skills against a large and growing dataset
- Create and improve full features in short development cycles, including effective frontend and backend code
- Work in close coordination with designers, product managers, and business stakeholders
- Track metrics and measurements alongside core features to help make informed decisions
- Balance development with collaborative meetings
- Use patterns of code decomposition to break down tasks into deliverable solutions
- Ensure quality releases by writing tests covering unit, integration, and functional requirements

Minimum Qualifications:
- 3+ years of professional programming experience, ideally in a web application environment
- Proficient in Ruby and Ruby on Rails, with working knowledge of JavaScript
- Experience building and shipping products, not just as a hands-on implementer but as a collaborator who contributes ideas and helps shape the roadmap
- Comfort with evaluating and integrating AI into workflows, including understanding where AI adds value — and where it doesn’t
- Familiarity with high-performing, agile development teams and best practices like CI/CD, code reviews, and feature flags
- Strong opinions on software architecture and development practices, grounded in real-world experience building and maintaining production systems

What Can Help Your Application Stand Out:
- Exposure to building AI-first features (e.g., workflow automation, generative AI, intelligent UIs)
- Prior programming experience in a web environment
- Degree in Computer Science or a completed bootcamp
- Git-based version control
- Database skills such as SQL within PostgreSQL
- Experience working within a design system to ensure visual and interaction consistency; Hotwire and Tailwind CSS experience is a bonus

Our Commitment to Inclusivity and Diversity

At G2, we are committed to creating an inclusive and diverse environment where people of every background can thrive and feel welcome. We consider applicants without regard to race, color, creed, religion, national origin, genetic information, gender identity or expression, sexual orientation, pregnancy, age, or marital, veteran, or physical or mental disability status. Learn more about our commitments here.

For job applicants in California, the United Kingdom, and the European Union, please review this applicant privacy notice before applying to this job.
Posted 2 days ago
0.0 - 1.0 years
0 Lacs
Shahdara, Delhi, Delhi
On-site
Job Title: Artificial Intelligence Engineer
Company: Humanoid Maker
Location: Delhi
Type: Full-Time
Experience: 1–5 years
Industry: Artificial Intelligence / Software Development / Robotics

About Us: Humanoid Maker is a fast-growing innovator in AI, robotics, and automation solutions. We specialize in AI-powered software, robotic kits, refurbished IT hardware, and technology services that empower startups, businesses, and institutions across India. Our mission is to build intelligent systems that simplify lives and boost productivity.

Role Overview: We are looking for a skilled and creative Artificial Intelligence Engineer to join our AI development team. This role involves building, training, and deploying machine learning models for various use cases including voice synthesis, natural language processing, computer vision, and robotics integration.

Key Responsibilities:
- Develop, train, and fine-tune machine learning and deep learning models for real-world applications.
- Work on projects related to NLP, speech recognition, voice cloning, and robotic intelligence.
- Build APIs and tools to integrate AI features into applications and hardware systems.
- Optimize models for performance and deploy them in edge, desktop, or server environments.
- Collaborate with UI/UX and backend developers for full-stack AI system integration.
- Conduct data preprocessing, feature engineering, and dataset management.
- Continuously research and experiment with the latest AI advancements.

Key Skills & Requirements:
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or a related field.
- 1–5 years of hands-on experience in developing AI/ML models.
- Strong knowledge of Python, PyTorch, TensorFlow, scikit-learn, etc.
- Experience with NLP libraries (e.g., Hugging Face Transformers, spaCy), and/or CV frameworks (OpenCV, YOLO).
- Familiarity with APIs, web frameworks (Flask/FastAPI), and databases (SQL or NoSQL).
- Bonus: Experience in robotics integration or embedded AI (Arduino/Raspberry Pi).

What We Offer:
- Competitive salary based on experience.
- Opportunity to work on cutting-edge AI and robotics projects.
- Friendly and innovative team environment.
- Career growth in a rapidly expanding AI company.
- Access to high-performance computing tools and training resources.

Job Types: Full-time, Permanent
Pay: ₹10,000.00–₹50,000.00 per month
Benefits: Paid sick time
Supplemental Pay: Commission pay, overtime pay, performance bonus, yearly bonus
Education: Bachelor's (Required)
Experience: AI: 1 year (Required)
Language: Hindi (Required)
Work Location: In person
Posted 2 days ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.

We’re Fiserv, a global leader in fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Advisor, Application Support

What successful file monitoring automation using Python involves:
- Automation script design and development: design and maintain advanced Python scripts that deliver comprehensive insights into the file transmission component and its lifecycle stages.
- Performance optimization: improve efficiency when handling large datasets using techniques such as optimized large-data manipulation and RDBMS data models.
- Advanced regex utilization: apply sophisticated regular expressions for accurate field extraction and mapping across large datasets.
- File transmission monitoring automation: track and report on each stage of file transmission, continuously refining monitoring strategies for enhanced reliability and visibility.
- Cross-functional collaboration: work closely with various teams to integrate Python scripts with broader IT systems and workflows.
- Develop and maintain automation scripts using Python for testing, data validation, and system operations.
- Design and implement automation frameworks.
- Automate file transmission applications using Python and Selenium.
- Maintain automated workflows and troubleshoot issues in the context of file transmissions.
- Write reusable, scalable, and maintainable code with proper documentation.

What You Will Need To Have
- Education: Bachelor’s and/or Master’s degree in Information Technology, Computer Science, or a related field.
- Experience: minimum of 10 years in IT, with a focus on Python, SFTP tools, data integration, or technical support roles.
- Proficiency in Python programming.
- Experience with Selenium for automation.
- Familiarity with test automation frameworks like PyTest or Robot Framework.
- Understanding of REST APIs and tools like Postman or Python requests.
- Basic knowledge of Linux/Unix environments and shell scripting.
- Database skills: experience with relational databases and writing complex SQL queries with advanced joins.
- File transmission tools: hands-on experience with platforms like Sterling File Gateway, IBM Sterling, or other MFT solutions.
- Analytical thinking: proven problem-solving skills and the ability to troubleshoot technical issues effectively.
- Communication: strong verbal and written communication skills for collaboration with internal and external stakeholders.

What Would Be Great To Have (Optional)
- Tool experience: familiarity with tools such as Splunk, Dynatrace, Sterling File Gateway, and file transfer tools.
- Linux: working knowledge of Linux and command-line operations.
- Secure file transfer protocols: hands-on experience with SFTP and tools like SFG, NDM, and MFT using SSH encryption.
- Task scheduling tools: experience with job scheduling platforms such as AutoSys, Control-M, or cron.

Thank you for considering employment with Fiserv. Please apply using your legal name, then complete the step-by-step profile and attach your resume (either is acceptable, both are preferable).

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
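The regex-driven field extraction and stage tracking this role describes can be sketched in stdlib Python. The log-line layout, field names, and stage names below are illustrative assumptions, not Fiserv's actual transmission records:

```python
import re

# Hypothetical file-transmission log lines; the layout is an assumption.
log = """\
2024-05-01 02:15:03 | file=claims_0501.dat | stage=RECEIVED | bytes=104857
2024-05-01 02:15:41 | file=claims_0501.dat | stage=DELIVERED | bytes=104857
2024-05-01 02:16:10 | file=remit_0501.dat | stage=RECEIVED | bytes=2048
"""

# Named groups make the extracted fields self-documenting.
LINE_RE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) \| "
    r"file=(?P<file>\S+) \| stage=(?P<stage>\w+) \| bytes=(?P<bytes>\d+)"
)

def parse(text):
    """Extract one dict per log line, coercing the byte count to int."""
    records = []
    for m in LINE_RE.finditer(text):
        rec = m.groupdict()
        rec["bytes"] = int(rec["bytes"])
        records.append(rec)
    return records

records = parse(log)

# Monitoring check: flag files that were received but never delivered.
received = {r["file"] for r in records if r["stage"] == "RECEIVED"}
delivered = {r["file"] for r in records if r["stage"] == "DELIVERED"}
stuck = received - delivered
print(stuck)  # {'remit_0501.dat'}
```

A production monitor would tail the real transmission log (or query the MFT platform's API) and alert on `stuck` files older than a threshold, but the extraction step is exactly this pattern: one compiled regex with named groups per record layout.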
Posted 2 days ago
12.0 years
0 Lacs
Nandigama, Telangana, India
On-site
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com

Job Function: Data Analytics & Computational Sciences
Job Sub Function: Biostatistics
Job Category: Scientific/Technology
All Job Posting Locations: Bangalore, Karnataka, India; Mumbai, India; Penjerla, Telangana, India

Job Description

Position Summary: The Principal Programming Lead is a highly skilled programmer with expert knowledge of programming languages, tools, complex data structures, and industry standards. The position requires proven technical and analytic abilities and strong capabilities in leading activities and programming teams in accordance with departmental processes and procedures. As a highly experienced Principal Programming Lead, they apply expert technical, scientific, and problem-solving skills, providing innovative and forward-thinking solutions to ensure operational efficiency across assigned projects while providing training, coaching, and mentoring to other programmers. The Principal Programming Lead is accountable for the planning, oversight, and delivery of programming activities in support of one or more clinical projects, compounds, or submissions of high complexity and criticality. In this role, the Principal Programming Lead is responsible for making decisions and recommendations that impact the efficiency, timeliness, and quality of deliverables with a high degree of autonomy, and provides leadership, direction, and technical and project-specific guidance to programming teams. In addition, this position may lead and contribute expert knowledge and technical skills to assigned delivery unit, departmental innovation, and process improvement projects.

Principal Responsibilities:
- Designs and develops efficient programs and technical solutions in support of highly complex/critical clinical research analysis and reporting activities, including urgent/on-demand analysis requests.
- Provides technical and project-specific guidance to programming team members to ensure high-quality and on-time deliverables in compliance with departmental processes.
- Coordinates and oversees programming team activities and may provide matrix leadership to one or more programming teams as needed.
- Shares knowledge and provides guidance and coaching to programmers in developing advanced technical and analytical abilities.
- Performs comprehensive review of, and provides input into, project requirements and documentation.
- Collaborates effectively with programming and cross-functional team members and counterparts to achieve project goals, and independently manages escalations.
- As applicable, oversees programming activities outsourced to third-party vendors, adopting appropriate processes and best practices to ensure their performance meets the agreed-upon scope, timelines, and quality.
- Responsible for adoption of new processes and technology on assigned projects/programs in collaboration with departmental technical groups and programming portfolio leads.
- Contributes to and may lead departmental innovation and process improvement projects, and may contribute programming expertise to cross-functional projects/initiatives.
- May play the role of a Delivery Unit/Disease Area Expert.
- Ensures continued compliance of projects/programs and required company and departmental training, time reporting, and other business/operational processes as required for the position.

Clinical Programming: Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards. Responsible for implementation of data tabulation standards. Performs data cleaning by programming edit checks and data review listings, and data reporting by creating data visualizations and listings for medical monitoring and central monitoring.

Statistical Programming: Responsible for implementation of data and analysis standards, ensuring consistency in analysis dataset design across trials within a program.

Principal Relationships: The Principal Programming Lead reports into a people manager position within the Delivery Unit and is accountable to the Portfolio Lead for assigned activities and responsibilities. Functional contacts within IDAR include but are not limited to: leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional contacts within J&J Innovative Medicine (as collaborator or peer) include but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, and Human Resources. External contacts include but are not limited to external partners, CRO management and vendor liaisons, industry peers, and working groups.

Education and Experience Requirements: Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Science, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees (e.g., Master's, PhD) preferred.

Experience and Skills Required:
- Approximately 12+ years of experience in the pharmaceutical, CRO, or biotech industry or a related field.
- In-depth knowledge of programming practices (including tools and processes).
- Working knowledge of relevant regulatory guidelines (e.g., ICH-GCP, 21 CFR Part 11).
- Project, risk, and team management skills, with an established track record of leading teams to successful outcomes.
- Excellent planning and coordination of project delivery.
- Established track record of collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors.
- Excellent communication, leadership, influencing, and decision-making skills, and a demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments.
- Demonstrated experience managing the outsourcing or externalization of clinical programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred.
- Expert CDISC standards knowledge.
- Expert knowledge of relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc. Knowledge of SAS is required for a Clinical Programming role.
- Excellent written and verbal communication, influencing, and negotiation skills.
- Advanced knowledge of programming and industry-standard data structures, with a thorough understanding of the end-to-end clinical trial process and relevant clinical research concepts.

Other: Innovative thinking allows for optimal design and execution of programming development strategies. Development and implementation of business change/innovative ways of working.
Posted 3 days ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This job is with Johnson & Johnson, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly. At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com Job Function Data Analytics & Computational Sciences Job Sub Function Biostatistics Job Category People Leader All Job Posting Locations: Bangalore, Karnataka, India, Hyderabad, Andhra Pradesh, India Job Description Integrated Data Analytics and Reporting (IDAR) Associate Director Portfolio Lead Clinical Programming* (*Title may vary based on Region or Country requirements) Position Summary The Associate Director Portfolio Lead Clinical Programming is a highly experienced individual with expert understanding of programming strategies, practices, methods, processes, technologies, industry standards, complex data structures, and analysis and reporting solutions. This position requires strong project and people leadership skills with the capability to effectively coordinate and oversee programming activities across teams in accordance with company and departmental processes and procedures. As a portfolio leader, this position is responsible for formulating the Programming strategy across a large portfolio of one or more programs within a Disease area and/or Delivery Unit, with accountability for operational oversight and effective planning and execution of programming activities for their assigned portfolio. 
This position interfaces with program-level Delivery Unit Leaders to provide regular status updates, identify and manage risks and issues, and ensure the appropriate use of escalation pathways to functional leaders as needed. The position provides functional-area people and/or matrix leadership to departmental staff, and is responsible for the recruitment, onboarding, performance management, and development of people, future skills, and technical expertise within its reporting line, while building an inclusive and diverse working environment. The Associate Director Portfolio Lead Clinical Programming may also take on second-line management responsibilities (i.e., manager of managers). The role plays a critical part in the growth and development of C&SP and contributes to organizational effectiveness, transparency, and communication. It directly contributes to delivery of the J&J IM R&D portfolio through effective leadership of, and accountability for, large or complex clinical development programs and strategic innovation projects. In collaboration with senior departmental leadership, the Associate Director Portfolio Lead influences departmental effectiveness, acting as a change agent to shape, drive, and implement the departmental strategic vision. This position develops strong and productive working relationships with key stakeholders within IDAR as well as broader partners, external suppliers, and industry groups.

Principal Responsibilities

As Project Leader: Drives the strategy, planning, execution, and completion of all programming activities and deliverables within the assigned scope, ensuring quality, compliance standards, consistency, and efficiency. Proactively evaluates and manages resource demand, allocation, utilization, and delivery to meet current and future business needs. Ensures timely and effective maintenance of functional planning systems.
May include forecasting related to potential in-licensing and acquisitions. Independently and effectively manages issue escalations, adopting appropriate escalation pathways. Collaborates with cross-functional and external partners on programming-related deliverables for co-development programs and on defining the data integration strategy of the assigned programs/projects. Ensures training compliance and development of appropriate job skills for assigned personnel. Contributes to the development of functional vendor contracts and oversees delivery in line with agreed milestones and scope of work, R&D business planning, and budget estimates. Serves as the primary point of contact for sourcing providers and is responsible for establishing a strategic partnership. Drives the enhancement of functional, technical, and/or scientific capabilities within C&SP and shares best practices. Leads programming-related aspects of regulatory agency inspections and J&J internal audits, ensuring real-time inspection readiness for all programming deliverables. Provides input to the submission strategy for regulatory agencies and ensures all programming deliverables are complete and compliant.

As People Leader: Responsible for attracting and retaining top talent, proactively managing performance, and actively supporting talent development and succession planning. Ensures organizational effectiveness, transparency, and communication. Provides mentorship and coaching to programming team members. Ensures training compliance and development of appropriate job skills for assigned personnel, oversees their work allocation, and provides coaching and guidance as necessary. Responsible for local administration and decision making associated with the management of assigned personnel.

As Matrix Leader: Accountable for actively identifying opportunities, evaluating, and driving solutions to enhance efficiency and knowledge-sharing across programs, value streams, and the department.
Serves as a departmental resource in areas of process and technical expertise. Stays current with industry trends and policies related to programming. Leads departmental innovation and process improvement projects and, as required, may contribute programming expertise to cross-functional projects/initiatives. Provides strategic direction within Delivery Unit initiatives and projects. Serves as a programming expert and influencer on internal and external (industry) work groups.

As Clinical Programming Leader: Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards.

As Statistical Programming Leader: Responsible for implementation of data and analysis standards, ensuring consistency in analysis dataset design across trials within a program.

Principal Relationships
This role reports into a people manager position within the Delivery Unit and is accountable to the Director of Programming for assigned activities and responsibilities. Functional contacts within IDAR include, but are not limited to: leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional contacts within J&J Innovative Medicine (as collaborator or peer) include, but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, and Human Resources. External contacts include, but are not limited to, external partners, CRO management and vendor liaisons, industry peers, and working groups.

Education and Experience Requirements
Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Science, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees (e.g., Master's, PhD) preferred.
Experience and Skills

Required: Approx. 15+ years of experience in the pharmaceutical, CRO, or biotech industry or a related field. In-depth knowledge of programming practices (including tools and processes). In-depth knowledge of regulatory guidelines (e.g., ICH-GCP). Project, risk, and team management skills, and an established track record of leading teams to successful outcomes. Excellent people management skills, including staff performance management and people development. Excellent planning and coordination of deliverables. Established track record of collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Excellent communication, leadership, influencing, and decision-making skills, and demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments. Excellent written and verbal communication skills. Demonstrated experience managing the outsourcing or externalization of programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred. Expert CDISC standards knowledge. Expert knowledge of data structures and relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc.

Other: Innovative thinking to allow for optimal design and execution of clinical and/or statistical development strategies. Development and implementation of business change/innovative ways of working.
Posted 3 days ago
3.0 years
0 Lacs
India
On-site
In accordance with the strategic editorial plan, this position is primarily responsible for maintaining Sage Data and supporting major data project initiatives. The position works closely with key stakeholders across the library editorial, product development, publishing technologies, and marketing/sales teams.

About our Team: The Editorial Processing team at Sage is a dynamic and collaborative group dedicated to curating, maintaining, and enhancing high-quality digital resources for the academic community. We are passionate about data integrity, user experience, and delivering valuable insights through innovative data products like Sage Data. Working closely with stakeholders across editorial, technology, marketing, and product development, our team drives initiatives that ensure our resources meet the evolving needs of researchers, students, and librarians. We combine editorial excellence with technical acumen and project management skills, fostering an environment where detail-oriented, analytical, and creative professionals thrive. Joining our team means becoming part of a mission-driven culture that values precision, innovation, and collaboration, where every voice is heard and every contribution counts toward advancing knowledge and accessibility in the academic world.

What is your team’s key role in the business? Our team plays a vital role in ensuring the quality, accuracy, and consistency of published content across all Learning Resource platforms. We act as the bridge between content creation and publication, managing the end-to-end editorial workflow with precision and efficiency. Our team is responsible for reviewing, formatting, and processing submissions to meet editorial standards and publication guidelines. From initial manuscript handling to final approvals, we ensure each piece meets rigorous quality benchmarks.
With a strong focus on detail, timeliness, and consistency, the Editorial Processing Team supports the broader mission of delivering trusted, high-quality content to our audience. Our work may be behind the scenes, but it is foundational to the credibility and success of our publications.

What other departments do you work closely with?
Publishing Technologies / IT – to support content ingestion, interface functionality, and technical documentation.
Product Development – to align editorial work with product strategy and feature enhancements.
Sales and Marketing – to develop support materials and communicate product value to library customers and end users.
Content Teams – to manage the ongoing acquisition, updating, and quality control of datasets.
Customer Support / User Services – to ensure a seamless experience for users and address feedback or technical issues related to content.

Key Accountabilities
The essential job functions include, but are not limited to, the following for Sage data products:
With the Content team, contribute to the content ingestion and update process for Sage data products.
Create dataset metadata, ensuring accuracy and timeliness.
Perform quality assurance checks on data content and content behavior on the Sage Data interface.
Create and maintain technical documentation on the collection and ingest of Sage Data datasets from original sources.
Contribute to the development and maintenance of editorially created data product end-user support materials.
Work with the Executive Editor to assist Sales and Marketing in creating necessary support materials.
Contribute to decision making about product functionality and content acquisitions.
Skills, Qualifications & Experience
Any combination equivalent to, but not limited to, the following:
At least 3 years of publishing experience, preferably in developing digital resources for the academic library market, OR at least 3 years' experience in technical or digital services for a library, library consortium, archives, or museum.
Proficient computer and database skills; competency in the Microsoft 365 suite of software.

Language skills, reasoning ability, and analytical aptitude:
Exceptional reading and comprehension skills, with an ability to distil and communicate dense information concisely in English.
Detail oriented, with strong copyediting, proofreading, and quality assurance skills.
Effective listening, verbal, and written communication skills.
Comfortable with technology.
Ability to foster effective relationships with marketing, IT, and product stakeholders.
Ability to set and follow through on priorities.
Ability to plan and manage multiple projects and effectively multi-task.
Ability to manage time effectively to meet deadlines and work professionally under pressure.
Ability to maintain confidentiality and work with diplomacy.
Ability to reason and problem-solve.
Proficient analytical and mathematical skills.
Effective public speaking and/or presenting to individuals and groups.

Diversity, Equity, and Inclusion
At Sage we are committed to building a diverse and inclusive team that is representative of all sections of society and to sustaining a culture that celebrates difference, encourages authenticity, and creates a deep sense of belonging. We welcome applications from all members of society irrespective of age, disability, sex or gender identity, sexual orientation, color, race, nationality, ethnic or national origin, religion or belief, as creating value through diversity is what makes us strong.
Posted 3 days ago
3.0 years
5 - 8 Lacs
Gurgaon
Remote
Job Description

About this role
Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over USD $11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference.

Technology & Operations
Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures, and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing (including Corporate Actions and Cash & Asset Operations), and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally, as well as industry-leading service to our end clients.
GIO Engineering
Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-oriented workflows, data pipelines, and management dashboards.

The Role – GIO Engineering Data Lead
Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau/Power BI dashboards for internal BlackRock clients. The next stage in the project involves Azure/Snowflake integration and commercializing the offering so that BlackRock’s 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business.

Responsibilities
Specifically, we expect the role to involve the following core responsibilities, and would expect a successful candidate to be able to demonstrate the following (not in order of priority):
Design, develop, and maintain a data analytics infrastructure.
Work with a project manager, or drive the project management of team deliverables.
Work with subject matter experts and users to understand the business and their requirements.
Help determine the optimal dataset and structure to deliver on those user requirements.
Work within a standard data/technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner.
Work with the team lead to understand and help prioritize the team’s queue of work.
Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize or eliminate the associated developer BAU activities.
Leverage industry-standard and internal tooling whenever possible, in order to reduce the amount of custom code that requires maintenance.

Experience
3+ years of experience writing ETL, data curation, and analytical jobs using Hadoop-based distributed computing technologies: Spark/PySpark, Hive, etc.
3+ years of knowledge of and experience with large enterprise databases, preferably cloud-based databases/data warehouses such as Snowflake on an Azure or AWS set-up.
Knowledge of and experience with Data Science / Machine Learning / Gen AI frameworks in Python (e.g., Azure OpenAI, Meta, etc.).
Knowledge of and experience building reporting and dashboards using BI tools: Tableau, MS Power BI, etc.
Prior experience with source code version management tools such as GitHub.
Prior experience working with and following Agile-based workflow paths and ticket-based development cycles.
Prior experience setting up infrastructure and working on big data analytics.
Strong analytical skills, with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy.
Experience working with SMEs/Business Analysts, and with stakeholders for sign-off.

Our benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law. Job Requisition # R254094
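The GIO Engineering description above emphasises exception-based workflows and automated periodic reporting. As a rough, hypothetical illustration of that pattern (the rule and field names below are invented for illustration and do not come from BlackRock or Aladdin), a scheduled reporting job can emit only the records that fail a validation rule, rather than republishing the full dataset:

```python
# Sketch of an exception-based reporting step: instead of publishing every
# record, the job surfaces only rows that violate a validation rule, so
# downstream users review exceptions rather than volume.
# All field and rule names below are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # returns True when the record passes

RULES = [
    # ISO-format date strings compare correctly as plain strings.
    Rule("settlement_not_before_trade",
         lambda r: r["settle_date"] >= r["trade_date"]),
    Rule("quantity_positive", lambda r: r["quantity"] > 0),
]

def exceptions(records: list[dict]) -> list[dict]:
    """Return one exception row per (record, failed rule) pair."""
    out = []
    for rec in records:
        for rule in RULES:
            if not rule.check(rec):
                out.append({"trade_id": rec["trade_id"], "rule": rule.name})
    return out

trades = [
    {"trade_id": "T1", "trade_date": "2024-06-03",
     "settle_date": "2024-06-05", "quantity": 100},
    {"trade_id": "T2", "trade_date": "2024-06-03",
     "settle_date": "2024-06-02", "quantity": -5},
]
report = exceptions(trades)
# T1 passes both rules and never appears; T2 fails both.
```

Passing records never reach the output, which is what keeps automated daily/weekly/monthly reports focused on items that need human attention.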
Posted 3 days ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at https://www.jnj.com

Job Function: Data Analytics & Computational Sciences
Job Sub Function: Biostatistics
Job Category: Scientific/Technology
All Job Posting Locations: Bangalore, Karnataka, India; Mumbai, India; PENJERLA, Telangana, India

Job Description

Position Summary: The Principal Programming Lead is a highly skilled programmer with expert knowledge of programming languages, tools, complex data structures, and industry standards. The position requires proven technical and analytic abilities and strong capabilities in leading activities and programming teams in accordance with departmental processes and procedures. As a highly experienced leader, the Principal Programming Lead applies expert technical, scientific, and problem-solving skills, providing innovative and forward-thinking solutions to ensure operational efficiency across assigned projects, and provides training, coaching, and mentoring to other programmers. The position is accountable for the planning, oversight, and delivery of programming activities in support of one or more clinical projects, compounds, or submissions of high complexity and criticality. In this role, the Principal Programming Lead is responsible for making decisions and recommendations that impact the efficiency, timeliness, and quality of deliverables with a high degree of autonomy, and provides leadership, direction, and technical and project-specific guidance to programming teams.
In addition, this position may lead, and contribute expert knowledge and technical skills to, assigned Delivery Unit, departmental innovation, and process improvement projects.

Principal Responsibilities
Designs and develops efficient programs and technical solutions in support of highly complex/critical clinical research analysis and reporting activities, including urgent/on-demand analysis requests. Provides technical and project-specific guidance to programming team members to ensure high-quality and on-time deliverables in compliance with departmental processes. Coordinates and oversees programming team activities and may provide matrix leadership to one or more programming teams as needed. Shares knowledge and provides guidance and coaching to programmers in developing advanced technical and analytical abilities. Performs comprehensive review of, and provides input into, project requirements and documentation. Collaborates effectively with programming and cross-functional team members and counterparts to achieve project goals, and independently manages escalations. As applicable, oversees programming activities outsourced to third-party vendors, adopting appropriate processes and best practices to ensure their performance meets the agreed-upon scope, timelines, and quality. Responsible for adoption of new processes and technology on assigned projects/programs in collaboration with departmental technical groups and programming portfolio leads. Contributes to, and may lead, departmental innovation and process improvement projects, and may contribute programming expertise to cross-functional projects/initiatives. May play the role of a Delivery Unit/Disease Area expert. Ensures continued compliance of projects/programs and completes required company and departmental training, time reporting, and other business/operational processes as required for the position.
Clinical Programming: Oversees the design, development, validation, management, and maintenance of clinical databases according to established standards. Responsible for implementation of data tabulation standards. Performs data cleaning by programming edit checks and data review listings, and performs data reporting by creating data visualizations and listings for medical monitoring and central monitoring.

Statistical Programming: Responsible for implementation of data and analysis standards, ensuring consistency in analysis dataset design across trials within a program.

Principal Relationships
The Principal Programming Lead reports into a people manager position within the Delivery Unit and is accountable to the Portfolio Lead for assigned activities and responsibilities. Functional contacts within IDAR include, but are not limited to: leaders and leads in Data Management and Central Monitoring, Programming Leads, Clinical Data Standards, Regulatory Medical Writing Leads, and system support organizations. Functional contacts within J&J Innovative Medicine (as collaborator or peer) include, but are not limited to: Statistics, Clinical, Global Medical Safety, Project Management, Procurement, Finance, Legal, Global Privacy, Regulatory, Strategic Partnerships, and Human Resources. External contacts include, but are not limited to, external partners, CRO management and vendor liaisons, industry peers, and working groups.

Education and Experience Requirements
Bachelor's degree (e.g., BS, BA) or equivalent professional experience is required, preferably in Computer Science, Mathematics, Data Science/Engineering, Public Health, or another relevant scientific field (or equivalent theoretical/technical depth). Advanced degrees (e.g., Master's, PhD) preferred.

Experience and Skills
Required: Approx. 12+ years of experience in the pharmaceutical, CRO, or biotech industry or a related field. In-depth knowledge of programming practices (including tools and processes).
Working knowledge of relevant regulatory guidelines (e.g., ICH-GCP, 21 CFR Part 11). Project, risk, and team management skills, and an established track record of leading teams to successful outcomes. Excellent planning and coordination of project delivery. Established track record of collaborating with multi-functional teams in a matrix environment and partnering with/managing stakeholders, customers, and vendors. Excellent communication, leadership, influencing, and decision-making skills, and demonstrated ability to foster team productivity and cohesiveness while adapting to rapidly changing organizations and business environments. Demonstrated experience managing the outsourcing or externalization of clinical programming activities in the clinical trials setting (e.g., working with CROs, academic institutions) is preferred. Expert CDISC standards knowledge. Expert knowledge of relevant programming languages for data manipulation and reporting; may include SAS, R, Python, etc. Knowledge of SAS is required for a Clinical Programming role. Excellent written and verbal communication, influencing, and negotiation skills. Advanced knowledge of programming and industry-standard data structures, and a thorough understanding of the end-to-end clinical trial process and relevant clinical research concepts.

Other: Innovative thinking to allow for optimal design and execution of programming development strategies. Development and implementation of business change/innovative ways of working.
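The data cleaning duty described under Clinical Programming, programming edit checks and producing data review listings, can be sketched as follows. This is an illustrative toy in Python (the role itself specifies SAS for clinical programming); the range limits and records are invented, and only the variable names loosely follow CDISC SDTM naming conventions:

```python
# Toy edit check of the kind described above: flag records whose values
# fall outside an expected range, producing a review listing for medical
# monitoring. Limits and data are invented for illustration; real studies
# define checks in an edit-check specification, typically implemented in SAS.

RANGES = {"SYSBP": (60, 200), "DIABP": (40, 130)}  # illustrative mmHg limits

def review_listing(rows):
    """Return (subject, test, value) tuples for out-of-range vital signs."""
    flagged = []
    for row in rows:
        lo, hi = RANGES[row["VSTESTCD"]]
        if not lo <= row["VSSTRESN"] <= hi:
            flagged.append((row["USUBJID"], row["VSTESTCD"], row["VSSTRESN"]))
    return flagged

vitals = [
    {"USUBJID": "001", "VSTESTCD": "SYSBP", "VSSTRESN": 118},
    {"USUBJID": "002", "VSTESTCD": "SYSBP", "VSSTRESN": 250},
    {"USUBJID": "002", "VSTESTCD": "DIABP", "VSSTRESN": 82},
]
listing = review_listing(vitals)
# Only subject 002's systolic reading of 250 is flagged for review.
```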
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
India
On-site
About the role: As a Data Engineer, you will be instrumental in managing our extensive soil carbon dataset and creating robust data systems. You are expected to be involved in the full project lifecycle, from planning and design through development and on to maintenance, including pipelines and dashboards. You’ll interact with Product Managers, Project Managers, Business Development, and Operations teams to understand business demands and translate them into technical solutions. Your goal is to provide an organisation-wide source of truth for various downstream activities while also working to improve and modernise our current platform.

Key responsibilities:
Design, develop, and maintain scalable data pipelines to process soil carbon and agricultural data.
Create and optimise database schemas and queries.
Implement data quality controls and validation processes.
Adapt existing data flows and schemas to new products and services under development.

Required qualifications:
BS/B.Tech in Computer Science or equivalent practical experience, with 5-7 years as a Data Engineer or in a similar role.
Strong SQL skills and experience optimising complex queries.
Proficiency with relational databases, preferably MySQL.
Experience building data pipelines, transformations, and dashboards.
Ability to troubleshoot and fix performance and data issues across the database.
Experience with AWS services (especially Glue, S3, RDS).
Exposure to the big data ecosystem – Snowflake/Redshift/Tableau/Looker.
Python programming skills.
Excellent written and verbal communication skills in English.

An ideal candidate would also have:
A high degree of attention to detail, to uncover and fix data discrepancies.
Familiarity with geospatial data.
Experience with scientific or environmental datasets.
Some understanding of the agritech or environmental sustainability sectors.
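The "data quality controls and validation processes" responsibility above pairs naturally with the SQL skills the role asks for. A minimal sketch using Python's built-in sqlite3 module (the posting names MySQL; SQLite stands in here so the example is self-contained, and the table and column names are hypothetical) shows a validation query that tags each failing row with the rule it broke:

```python
# Sketch of a SQL-based data-quality check on soil measurements: load
# sample rows into an in-memory SQLite database, then select only rows
# that violate basic physical constraints. Table/column names and limits
# are invented for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE soil_samples (
    sample_id          TEXT PRIMARY KEY,
    organic_carbon_pct REAL,  -- % by mass, must lie in [0, 100]
    depth_cm           REAL   -- sampling depth, must be positive
)""")
conn.executemany(
    "INSERT INTO soil_samples VALUES (?, ?, ?)",
    [("S1", 2.4, 15.0), ("S2", -1.0, 30.0), ("S3", 3.1, 0.0)],
)

# Each failing row is tagged with the name of the violated rule.
bad = conn.execute("""
    SELECT sample_id, 'carbon_out_of_range' AS rule FROM soil_samples
    WHERE organic_carbon_pct NOT BETWEEN 0 AND 100
    UNION ALL
    SELECT sample_id, 'nonpositive_depth' FROM soil_samples
    WHERE depth_cm <= 0
    ORDER BY sample_id
""").fetchall()
# S2 has negative carbon; S3 has zero depth; S1 is clean.
```

The same query shape ports directly to MySQL, and in a pipeline the failing IDs would be quarantined or routed to a review dashboard rather than loaded downstream.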
Posted 3 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role
Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over USD $11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference.

Technology & Operations
Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures, and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing (including Corporate Actions and Cash & Asset Operations), and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally, as well as industry-leading service to our end clients.
GIO Engineering
Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-oriented workflows, data pipelines, and management dashboards.

The Role – GIO Engineering Data Lead
Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau/Power BI dashboards for internal BlackRock clients. The next stage in the project involves Azure/Snowflake integration and commercializing the offering so that BlackRock’s 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business.

Responsibilities
Specifically, we expect the role to involve the following core responsibilities, and would expect a successful candidate to be able to demonstrate the following (not in order of priority):
Design, develop, and maintain a data analytics infrastructure.
Work with a project manager, or drive the project management of team deliverables.
Work with subject matter experts and users to understand the business and their requirements.
Help determine the optimal dataset and structure to deliver on those user requirements
Work within a standard data / technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner
Work with the team lead to understand and help prioritize the team's queue of work
Automate periodic (daily, weekly, monthly, quarterly or other) reporting processes to minimize or eliminate the associated developer BAU activities
Leverage industry-standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance

Experience
3+ years of experience writing ETL, data curation and analytical jobs using Hadoop-based distributed computing technologies: Spark / PySpark, Hive, etc.
3+ years of knowledge and experience working with large enterprise databases, preferably cloud-based databases / data warehouses such as Snowflake on an Azure or AWS set-up
Knowledge of and experience with Data Science / Machine Learning / Gen AI frameworks in Python (Azure OpenAI, Meta, etc.)
Knowledge of and experience with building reporting and dashboards using BI tools: Tableau, MS Power BI, etc.
Prior experience working with source code version management tools such as GitHub
Prior experience working with and following Agile-based workflow paths and ticket-based development cycles
Prior experience setting up infrastructure and working on big data analytics
Strong analytical skills, with the ability to collect, organize, analyse, and disseminate significant amounts of information with attention to detail and accuracy
Experience working with SMEs / Business Analysts, and working with stakeholders for sign-off

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our Hybrid Work Model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress.
This mission would not be possible without our smartest investment – the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.

For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock

BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
We're recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We'll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and a customer-feature-rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
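The streaming-ingestion responsibility above can be sketched in miniature as a tumbling-window count over timestamped events. This is a plain-Python stand-in for what a Kafka + Spark Structured Streaming job would express; the event shapes and names here are illustrative, not part of the role:

```python
from collections import Counter

def count_per_window(events, window_s=60):
    """Bucket (timestamp_seconds, key) events into fixed tumbling
    windows and count occurrences per (window, key) pair - the same
    shape a streaming job produces with a windowed groupBy/count."""
    counts = Counter()
    for ts, key in events:
        counts[(ts // window_s, key)] += 1
    return counts

# Toy event stream: two logins in the first minute, one click in the second.
events = [(5, "login"), (30, "login"), (70, "click")]
print(count_per_window(events))
```

In a real pipeline the event source would be a Kafka topic and the windowing would be declared on the stream rather than computed in a loop, but the aggregation logic is the same.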
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure

You'll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation
An understanding of machine learning, information retrieval or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
Knowledge of messaging, event or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
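Automated data quality testing, one of the skills listed above, often comes down to small rule-based checks run against each batch before it is loaded. A minimal plain-Python sketch, with hypothetical field names and rules chosen only for illustration:

```python
def check_rows(rows, required=("id", "amount")):
    """Return (row_index, problem) pairs for records that fail
    simple data-quality rules: required fields present and non-empty,
    and no negative amounts. Rules here are illustrative examples."""
    problems = []
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) in (None, ""):
                problems.append((i, f"missing {field}"))
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            problems.append((i, "negative amount"))
    return problems

records = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": -5.0},  # fails both rules
]
print(check_rows(records))  # [(1, 'missing id'), (1, 'negative amount')]
```

In practice these checks would run inside the pipeline (e.g. as a validation stage before the warehouse load) and feed a QA report rather than a print statement.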
Posted 3 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Engineering Lead
This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure
Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank
We're recruiting for multiple roles across a range of levels, up to and including experienced managers

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We'll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code to well-tested, production-ready code
Helping to define common coding standards and model monitoring performance best practices
Owning and delivering the automation of data engineering pipelines through the removal of manual stages
Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight
Leading and delivering data engineering strategies to build a scalable data architecture and a customer-feature-rich dataset for data scientists
Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure

You'll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation
An understanding of machine learning, information retrieval or recommendation systems
Good working knowledge of CI/CD tools
Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow
Knowledge of messaging, event or streaming technology such as Apache Kafka
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Azure Data Engineer
Location: Pune
Mandatory skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice period: 0 to 30 days / immediate joiner / serving notice period

Must-have experience:
Strong design and data solutioning skills
PySpark hands-on experience, with complex transformations and large-dataset handling
Good command of and hands-on experience in Python, including the following concepts, packages, and tools:
Object-oriented and functional programming
NumPy, Pandas, Matplotlib, requests, pytest
Jupyter, PyCharm and IDLE
Conda and virtual environments
Working experience with Hive, HBase or similar is a must

Azure skills:
Must have working experience in Azure Data Lake, Azure Data Factory, Azure Databricks, Azure SQL Database
Azure DevOps
Azure AD integration, service principals, pass-through login, etc.
Networking – VNet, private links, service connections, etc.
Integrations – Event Grid, Service Bus, etc.

Database skills:
Oracle, Postgres, SQL Server – experience with any one database
Oracle PL/SQL or T-SQL experience
Data modelling

Thank you
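The "complex transformations" asked for above typically reduce to group-and-aggregate shapes. A pure-Python sketch of the logic that a PySpark job would express as df.groupBy(key).sum(value); the column names ("region", "sales") are invented for illustration:

```python
from collections import defaultdict

def total_by_key(rows, key="region", value="sales"):
    """Aggregate a numeric column per key - the plain-Python
    equivalent of a PySpark groupBy(key).sum(value)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

data = [
    {"region": "APAC", "sales": 100.0},
    {"region": "EMEA", "sales": 50.0},
    {"region": "APAC", "sales": 25.0},
]
print(total_by_key(data))  # {'APAC': 125.0, 'EMEA': 50.0}
```

On Databricks the same transformation runs distributed across partitions, which is what makes the large-dataset handling experience in the listing relevant.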
Posted 4 days ago