8.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Calling all innovators – find your future at Fiserv.

We're Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we're involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Tech Lead, Data Architecture

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance, and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What You Will Do
- Define and implement best practices for data modelling, schema design, and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform, and load data into Snowflake from various sources
- Integrate data from diverse systems such as databases, APIs, flat files, and cloud storage into Snowflake, using tools like StreamSets, Informatica, or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization, and storage management
- Manage Snowflake caching, clustering, and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers, and business users to understand reporting and analytics needs
- Work closely with the DevOps team on automation, deployment, and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automation for routine tasks
- Enable seamless integration of Snowflake with BI tools like Power BI, and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flows, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Apply good working knowledge of the Linux operating system
- Use Git and other repository management solutions
- Use monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning, and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues, and initiatives

What You Will Need To Have
- 8 to 10 years of experience with data management tools such as Snowflake, StreamSets, and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized; works well in a fast-paced, fluid, and dynamic environment

What Would Be Great To Have
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage and IAM for access management
- Banking and financial services experience
- Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our Commitment To Diversity And Inclusion
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note To Agencies
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning About Fake Job Posts
Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
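The clustering and partitioning work this posting describes pays off through partition pruning: Snowflake keeps min/max metadata per micro-partition and skips any partition a filter cannot match. The toy below is a hedged, pure-Python sketch of that idea only (all data and names are hypothetical; Snowflake does this internally):

```python
# Illustrative sketch: prune partitions using per-partition min/max metadata,
# the way a clustered Snowflake table skips micro-partitions on a date filter.
from dataclasses import dataclass

@dataclass
class Partition:
    min_date: str          # ISO dates compare correctly as strings
    max_date: str
    rows: list

partitions = [
    Partition("2024-01-01", "2024-01-31", ["january rows..."]),
    Partition("2024-02-01", "2024-02-29", ["february rows..."]),
    Partition("2024-03-01", "2024-03-31", ["march rows..."]),
]

def scan(parts, lo, hi):
    """Return only partitions whose [min, max] range overlaps the query range;
    the rest are skipped ("pruned") without their rows ever being read."""
    return [p for p in parts if p.max_date >= lo and p.min_date <= hi]

# A query filtered to mid-February touches one of the three partitions.
hit = scan(partitions, "2024-02-10", "2024-02-20")
print(len(hit))  # 1
```

The better the data is clustered on the filter column, the tighter each partition's min/max range and the more partitions a query can skip.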
Posted 3 days ago
5.0 years
0 Lacs
Delhi, India
On-site
JOB_POSTING-3-71264-3

Job Description

Role Title: AVP, Reliability Engineer, EIS (L10)

Company Overview:
Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry's most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India's Best Companies to Work for by Great Place to Work. We were among the Top 50 India's Best Workplaces in Building a Culture of Innovation by All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by AmbitionBox Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles.

Organizational Overview
The Enterprise Integration Services team plays a pivotal role in connecting different systems and applications within an organization. This team specializes in designing, implementing, and maintaining integration solutions that enhance business functionality. Synchrony Middleware is a critical application supplying data to different back-end and front-end systems and Synchrony applications.

Role Summary/Purpose
The AVP, Reliability Engineer – Enterprise Integration Services plays a pivotal technical role within Synchrony Financial, providing technical expertise for the EIS applications and their components, which include Java Spring Boot, OpenSSL, ITX, and MQ. Additional responsibilities include leading the development and production support of Synchrony's EIS services by creating and developing thoughtful solutions to anticipate bugs and maintain operational excellence.

Key Responsibilities
- Develop, maintain, and optimize highly reliable software solutions using Java for enterprise applications.
- Define and implement strategies to improve system reliability, availability, and performance across the application infrastructure.
- Maintain close coordination with developers and Solution Architects to streamline and expedite deployment practices.
- Continuously seek opportunities to enhance products or services through process improvements.
- Monitor deployment issues and address them with immediacy; identify the root causes of failures and develop corrective actions to prevent recurrence.
- Serve as a Solution Engineer to support non-functional requirements in development, deployment, and ongoing tuning, as necessary.
- Troubleshoot and resolve technical issues related to the platform; create support tickets and work with IBM as needed; apply and promote patches.
- Install, configure, and administer server setups; handle infrastructure and environment migrations.
- Perform detailed code reviews to ensure quality, performance, and maintainability.
- Provide on-call support periodically throughout the year to ensure system reliability and incident response.
- Mentor and influence all levels of the team: in this role, you will have the opportunity to influence up and down the chain of command.

Required Skills/Knowledge
- Strong experience with Java, Spring Boot, DevOps, and Agile-based development.
- Good knowledge of IBM WebSphere / MQ clustering and administration.
- Good knowledge of IBM ITX, including Design Studio, setup, and implementation.
- Experience deploying IBM ITX/WTX (WebSphere Transformation Extender) and IBM MQ in Kubernetes containers.
- Experience with cloud-based environments (AWS, GCP, or Azure) and associated container management tools.

Desired Skills/Knowledge
- Working knowledge of containerization platforms such as Docker, and experience with Kubernetes orchestration.
- Good knowledge of RESTful design, SOAP APIs, and API specifications like OpenAPI (Swagger).
- Strong working knowledge of the financial industry and consumer lending.
- Desire to work in a dynamic, fast-paced environment.
- Excellent interpersonal skills with the ability to influence clients, team members, management, and external groups.

Eligibility Criteria
Bachelor's degree and 5+ years of relevant experience in Information Technology, or in lieu of a degree, 7+ years of relevant experience in Information Technology.

Work Timings: 2:00 PM to 11:00 PM IST
This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time and 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details.

For Internal Applicants
- Understand the criteria or mandatory skills required for the role before applying.
- Inform your manager and HRM before applying for any role on Workday.
- Ensure that your professional profile is updated (fields such as education, prior experience, other skills); it is mandatory to upload your updated resume (Word or PDF format).
- Must not be on any corrective action plan (First Formal/Final Formal).
- Only L8+ employees who have completed 18 months in the organization and 12 months in their current role and level are eligible to apply.

Grade/Level: 10
Job Family Group: Information Technology
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page.

Explore your next opportunity at an organization that ranks among the world's 500 largest companies. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary
The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role also supports debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem and develop and help implement robust, scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases into technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data across different data sources, utilizing platforms, methods, and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data, and engineer relevant features to be used for modelling.
- Actively drive the modelling of business problems into ML/AI models; work closely with stakeholders for model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
- Expertise in Python and SQL; experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive statistics, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities in the relevant domain; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- NLP, Gen AI, and LLM knowledge/experience.
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment; understanding of Lean Agile principles.

Contract Type: Permanent

At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Before you apply to a job, select your language preference from the options available at the top right of this page.

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description

Job Summary:
The UPS Enterprise Data Analytics team is looking for a talented and motivated Data Scientist to use statistical modelling and state-of-the-art AI tools and techniques to solve complex, large-scale business problems for UPS operations. This role also supports debugging and enhancing existing AI applications in close collaboration with the Machine Learning Operations team. This position will work with multiple stakeholders across different levels of the organization to understand the business problem and develop and help implement robust, scalable solutions. You will be in a high-visibility position with the opportunity to interact with senior leadership to bring forth innovation within the operational space for UPS. Success in this role requires excellent communication to present your cutting-edge solutions to both technical and business leadership.

Responsibilities
- Become a subject matter expert on UPS business processes and data to help define and solve business needs using data, advanced statistical methods, and AI.
- Be actively involved in understanding and converting business use cases into technical requirements for modelling.
- Query, analyze, and extract insights from large-scale structured and unstructured data across different data sources, utilizing platforms, methods, and tools like BigQuery, Google Cloud Storage, etc.
- Understand and apply appropriate methods for cleaning and transforming data, and engineer relevant features to be used for modelling.
- Actively drive the modelling of business problems into ML/AI models; work closely with stakeholders for model evaluation and acceptance.
- Work closely with the MLOps team to productionize new models, support enhancements, and resolve issues within existing production AI applications.
- Prepare extensive technical documentation, dashboards, and presentations for technical and business stakeholders, including leadership teams.

Qualifications
- Expertise in Python and SQL; experienced in using data science packages like scikit-learn, numpy, pandas, tensorflow, keras, statsmodels, etc.
- Strong understanding of statistical concepts and methods (hypothesis testing, descriptive statistics, etc.) and machine learning techniques for regression, classification, and clustering problems, including neural networks and deep learning.
- Proficient in using GCP tools like Vertex AI, BigQuery, GCS, etc. for model development and other activities in the ML lifecycle.
- Strong ownership and collaborative qualities in the relevant domain; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Solid oral and written communication skills, especially around analytical concepts and methods; ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
- Master's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- NLP, Gen AI, and LLM knowledge/experience.
- Knowledge of Operations Research methodologies and experience with packages like CPLEX, PuLP, etc.
- Knowledge and experience in MLOps principles and tools in GCP.
- Experience working in an Agile environment; understanding of Lean Agile principles.

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
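Among the machine-learning techniques this posting lists (regression, classification, clustering), clustering is the easiest to sketch compactly. Below is an illustrative pure-Python 1-D k-means on toy data (values are hypothetical; in practice one would reach for scikit-learn's KMeans):

```python
# Minimal 1-D k-means: alternate between assigning points to their nearest
# centroid and recomputing each centroid as the mean of its assigned points.
def kmeans_1d(points, k, iters=20):
    # Initialize centroids with the k smallest distinct values.
    centroids = sorted(set(points))[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assignment step: nearest centroid by absolute distance.
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Update step: mean of each cluster (keep old centroid if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
result = kmeans_1d(data, 2)
print(result)  # two centroids, near 1.0 and 9.0
```

The same assign/update loop generalizes to higher dimensions by swapping the absolute distance for Euclidean distance and the scalar mean for a per-coordinate mean.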
Posted 3 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the job

HERE Technologies is a location data and technology platform company. We empower our customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets to guiding drivers to their destination safely. At HERE we take it upon ourselves to be the change we wish to see. We create solutions that fuel innovation, provide opportunity and foster inclusion to improve people's lives. If you are inspired by an open world and driven to create positive change, join us. Learn more about us on our YouTube channel.

In this position you will be part of HERE's Places Ingestion team, which is responsible for discovering Points of Interest (Places) by processing large volumes of raw data from a variety of sources to improve content coverage, accuracy, and freshness. You will be part of an energetic and dedicated team that works on challenging tasks in distributed processing of large data and streaming technologies. In addition to the technical challenges this position offers, you will have every opportunity to expand your career both technically and personally in this role.

What's the role?
- You will help design and build the next iteration of processes to improve the quality of Place attributes using machine learning.
- You will maintain up-to-date knowledge of research activities in the general fields of machine learning and LLMs.
- Utilize machine learning algorithms/LLMs to generate translations/transliterations and standardization/derivation rules, and to extract place attributes such as name, address, category, and hours of operation from websites using web scraping solutions.
- Participate in both algorithm and software development as part of a scrum team, and contribute artifacts (software, white papers, datasets) for project reviews and demos.
- Collaborate with internal and external team members (researchers and engineers) on expertly implementing new features in the products or enhancing existing features, covering end-to-end aspects like developing, testing, and deploying.

Who are you?
You are determined and have the following to be successful in the role:
- MS or PhD in a discipline such as Statistics, Applied Mathematics, Computer Science, or Data Science, with an emphasis or thesis work in one or more of the following areas: statistics/science/engineering, data analysis, machine learning, LLMs
- 3+ years of experience in the data science field
- Proficiency with at least one of the deep learning frameworks such as TensorFlow, Keras, or PyTorch
- Programming experience with Python and shell scripting
- Applied statistics or experimentation (e.g. A/B testing, root cause analysis)
- Unsupervised machine learning methods (e.g. clustering, Bayesian methods)

HERE is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, age, gender identity, sexual orientation, marital status, parental status, religion, sex, national origin, disability, veteran status, and other legally protected characteristics.
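As a rough illustration of the attribute-extraction task described above, a rule-based extractor for hours of operation might look like the sketch below. The pattern and field names are hypothetical simplifications; a real Places pipeline would combine scraping with ML/LLM extraction and validation.

```python
# Toy extractor: pull a "day-range: open-close" pattern out of page text.
import re

HOURS_RE = re.compile(
    r"(Mon|Tue|Wed|Thu|Fri|Sat|Sun)[a-z]*\s*[-–]\s*"
    r"(Mon|Tue|Wed|Thu|Fri|Sat|Sun)[a-z]*:?\s*"
    r"(\d{1,2}(?::\d{2})?\s*(?:am|pm))\s*[-–]\s*(\d{1,2}(?::\d{2})?\s*(?:am|pm))",
    re.IGNORECASE,
)

def extract_hours(text):
    """Return a structured hours record from free text, or None if absent."""
    m = HOURS_RE.search(text)
    if not m:
        return None
    return {"from_day": m.group(1), "to_day": m.group(2),
            "open": m.group(3), "close": m.group(4)}

snippet = "Visit us! Hours: Monday - Friday: 9am - 5:30pm. Closed weekends."
print(extract_hours(snippet))
```

Rules like this cover only the formats they anticipate, which is exactly why the posting emphasizes ML/LLM-based extraction for the long tail of page layouts.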
Posted 3 days ago
4.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
RabbitMQ Administrator - Prog Leasing1

Job Title: RabbitMQ Cluster Migration Engineer

Job Summary:
We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new AWS high-availability cluster environment. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime.

Key Responsibilities:
- Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup.
- Evaluate the current messaging architecture, performance bottlenecks, and limitations.
- Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed).
- Ensure high availability, fault tolerance, and disaster recovery configurations.
- Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans.
- Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes).
- Monitor message queues during the migration to ensure message durability and delivery guarantees.
- Document all aspects of the architecture, configurations, and migration process.

Required Qualifications:
- Strong experience with RabbitMQ, especially in clustered, high-availability environments.
- Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues.
- Experience with RabbitMQ management plugins, monitoring, and performance tuning.
- Proficiency with scripting languages (e.g., Bash, Python) for automation.
- Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm).
- Familiarity with containerization and orchestration (e.g., Docker, Kubernetes).
- Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.).
- Experience with zero-downtime migration and rollback strategies.

Preferred Qualifications:
- Experience migrating RabbitMQ clusters in production environments.
- Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services.
- Understanding of security in messaging systems (TLS, authentication, access control).
- Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
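The at-least-once guarantee named in the qualifications means the broker redelivers any message that was consumed but never acknowledged, so consumers must tolerate duplicates. The toy in-memory queue below (not RabbitMQ's actual API; names are hypothetical) sketches that ack/redeliver cycle:

```python
# Toy broker queue demonstrating at-least-once semantics: a message stays
# "unacked" after delivery and is requeued if the consumer never acks it.
from collections import deque

class ToyQueue:
    def __init__(self):
        self.ready = deque()   # messages awaiting delivery
        self.unacked = {}      # delivery_tag -> message awaiting ack
        self.next_tag = 0

    def publish(self, msg):
        self.ready.append(msg)

    def get(self):
        """Deliver one message; it stays unacked until ack() is called."""
        if not self.ready:
            return None
        msg = self.ready.popleft()
        self.next_tag += 1
        self.unacked[self.next_tag] = msg
        return self.next_tag, msg

    def ack(self, tag):
        self.unacked.pop(tag)

    def requeue_unacked(self):
        """Simulates a consumer crash: unacked messages return to the queue."""
        for tag in sorted(self.unacked):
            self.ready.append(self.unacked.pop(tag))

q = ToyQueue()
q.publish("order-42")
tag, msg = q.get()        # consumer receives the message...
q.requeue_unacked()       # ...but crashes before acking: broker requeues it
tag2, msg2 = q.get()      # redelivered: the same payload arrives twice
q.ack(tag2)
print(msg, msg2)  # order-42 order-42
```

This duplicate delivery is why at-least-once consumers are written to be idempotent, while exactly-once semantics require extra machinery such as deduplication or transactional processing.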
Posted 3 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Description: Senior Data Scientist Role Overview: We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in Data Science and Machine Learning, preferably with experience in NLP, Generative AI, LLMs, MLOps, Optimization techniques, and AI solution Architecture. In this role, you will play a key role in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate should have a deep understanding of AI technologies and experience in designing and implementing cutting-edge AI models and systems. Additionally, expertise in data engineering, DevOps, and MLOps practices will be valuable in this role. Responsibilities: Your technical responsibilities: Contribute to the design and implementation of state-of-the-art AI solutions. Assist in the development and implementation of AI models and systems, leveraging techniques such as Language Models (LLMs) and generative AI. Collaborate with stakeholders to identify business opportunities and define AI project goals. Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges. Utilize generative AI techniques, such as LLMs, Agentic Framework to develop innovative solutions for enterprise industry use cases. Integrate with relevant APIs and libraries, such as Azure Open AI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities. 
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment. Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs. Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs. Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly. Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency. Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases. Ensure compliance with data privacy, security, and ethical considerations in AI applications. Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications. Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus. Minimum 4 years of experience in Data Science and Machine Learning. In-depth knowledge of machine learning, deep learning, and generative AI techniques. Proficiency in programming languages such as Python, R, and frameworks like TensorFlow or PyTorch. Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models. Familiarity with computer vision techniques for image recognition, object detection, or image generation. Experience with cloud platforms such as Azure, AWS, or GCP and deploying AI solutions in a cloud environment. Expertise in data engineering, including data curation, cleaning, and preprocessing. Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems. 
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models. Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions. Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels. Understanding of data privacy, security, and ethical considerations in AI applications. Track record of driving innovation and staying updated with the latest AI research and advancements. Good to Have Skills: Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models Utilize optimization tools and techniques, including MIP (Mixed Integer Programming). Deep knowledge of classical AI/ML (regression, classification, time series, clustering) Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models. Implement CI/CD pipelines for streamlined model deployment and scaling processes. Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines. Apply infrastructure as code (IaC) principles, employing tools like Terraform or CloudFormation. Implement monitoring and logging tools to ensure AI model performance and reliability. Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment. Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
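The Redis/vector-database retrieval and similarity-search responsibilities in this posting rest on comparing embedding vectors. A minimal sketch, assuming toy 3-dimensional vectors in place of real model embeddings (the corpus and vector values are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, doc_vecs, k=2):
    # Rank document vectors by similarity to the query vector and keep the best k.
    scored = [(cosine_similarity(query_vec, v), doc_id) for doc_id, v in doc_vecs.items()]
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]

# Toy "embeddings" standing in for real model output.
docs = {"invoice": [0.9, 0.1, 0.0], "payroll": [0.1, 0.9, 0.0], "chat_log": [0.0, 0.2, 0.9]}
print(top_k([0.8, 0.2, 0.1], docs, k=1))  # -> ['invoice']
```

A vector database such as Redis performs this ranking at scale with approximate-nearest-neighbor indexes rather than the exhaustive scan shown here.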
Posted 3 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In This Role, Your Responsibilities May Include Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Preferred Education Master's Degree Required Technical And Professional Expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. 
Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements Preferred Technical And Professional Experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks
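Clustering-key maintenance of the kind this posting describes usually comes down to a couple of statements. A minimal sketch that renders them from Python; the table and column names are invented, while `ALTER TABLE ... CLUSTER BY` and `SYSTEM$CLUSTERING_INFORMATION` are standard Snowflake SQL:

```python
def cluster_by_ddl(table, columns):
    # Emit the Snowflake statement that (re)defines clustering keys for a table.
    cols = ", ".join(columns)
    return f"ALTER TABLE {table} CLUSTER BY ({cols});"

def clustering_check_sql(table, columns):
    # Emit the query that reports clustering depth/overlap for those columns,
    # which is how you verify the key actually helps pruning.
    cols = ", ".join(columns)
    return f"SELECT SYSTEM$CLUSTERING_INFORMATION('{table}', '({cols})');"

print(cluster_by_ddl("sales.orders", ["order_date", "region"]))
# -> ALTER TABLE sales.orders CLUSTER BY (order_date, region);
```

In practice these strings would be executed through the Snowflake connector; low average clustering depth in the `SYSTEM$CLUSTERING_INFORMATION` output indicates well-pruned micro-partitions.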
Posted 3 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Persistent We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. 
For more details please log in to www.persistent.com About The Position We are looking for a DevOps Lead Engineer to be responsible for creating software deployment strategies that are essential for the successful deployment of software in the work environment. You will identify and implement data storage methods like clustering to improve the performance of the team. What you'll do Manage a group of highly motivated DevOps engineers and systems administrators Participate in the agile ceremonies and interface with the agile team(s) and other program staff as required Work with application teams to help them adopt continuous build, inspection, testing and deployment Participate in all aspects of DevOps engineering and promote industry standard methodologies in DevOps engineering Migrate code from TFS to Azure DevOps Help to configure DevOps stack with regards to performance monitoring, analytics, and auditability Design and build a new code production pipeline Developing "idealized" automated CI / CD processes and working with teams to implement those processes in SSGA's DevOps technology stack Provide deployment and occasional off hours support Analyze existing standards to identify gaps and remedies. Evaluate gaps related to DevOps best practices Develop and maintain installation, configuration and operations procedures Develop JUnit tests to support code coverage as part of the CI / CD pipeline Share best practices with a focus on re-use of application code Work with the development, project / product management organizations to align projects, releases, patches, and other efforts Implement automation tools and frameworks (CI / CD pipelines) Expertise you'll bring Qualifications: Bachelor's Degree in Computer Science, Computer Engineering or a closely related field. A Bachelor's degree in Computer Science is preferable, while a Master's degree will carry a lot more weight. Experience: 8+ years working in the related field. 
Additionally, experience in the following: Automating and orchestrating workloads for large-scale enterprise Java applications using Ansible Working with Cloud solutions at massive scale and resiliency. Deploying updates and fixes Developing scripts to automate visualization Writing scripts and automation using Perl / Python / Groovy / Java / Bash. Shell scripting, Python, Groovy, etc Good to have skills PostgreSQL, MySQL, NoSQL, and / or Cassandra Migrating applications to AWS cloud; AWS certifications Test Driven Development Knowledge: Ruby or Python Build tools like Ant, Maven, and Gradle, including configuring & adopting Scaled Agile Framework (SAFe) practices and tools; Certification in Agile delivery (e.g., SAFe Agilist) Benefits Competitive salary and benefits package Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications Opportunity to work with cutting-edge technologies Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards Annual health check-ups Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. Inclusive Environment We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. 
We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to: Accelerate growth, both professionally and personally Impact the world in powerful, positive ways, using the latest technologies Enjoy collaborative innovation, with diversity and work-life wellbeing at the core Unlock global opportunities to work and learn with the industry’s best Let’s unleash your full potential at Persistent - persistent.com/careers
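The continuous build/inspection/testing/deployment flow described in this posting can be sketched as a toy stage runner that stops at the first failing gate; the stage names are invented, and a real pipeline would be defined in the CI system (e.g., Azure DevOps YAML) rather than in Python:

```python
def run_pipeline(stages):
    # Run (name, step) pairs in order; stop at the first failure,
    # the way a CI pipeline gates later stages on earlier ones.
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break
    return results

stages = [
    ("build", lambda: True),   # stand-ins for compile, unit tests, deploy steps
    ("test", lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))
```

The point of the sketch is the gating behavior: a failed "test" stage means "deploy" never runs, which is the property the posting's "continuous build, inspection, testing and deployment" responsibility relies on.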
Posted 3 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do? AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Junior Data Scientist Location: Bangalore Reporting to: Senior Manager – Analytics Purpose of the role The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products. In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of: LLM-based frameworks, tools, and technologies Cloud-native technologies and solutions Microservices-based software architecture and design patterns As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. 
Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus. Key tasks & accountabilities Large Language Models (LLM): Experience with LangChain, LangGraph Proficiency in building agentic patterns like ReAct, ReWoo, LLMCompiler Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video) Designing and optimizing chunking strategies and clustering for large data processing Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines Low-latency inference and deployment architectures NL2SQL: Natural language-driven SQL generation for databases Experience with natural language interfaces to databases and query optimization API Development: Building scalable APIs with FastAPI for AI model serving Containerization & Orchestration: Proficient with Docker for containerized AI services Experience with orchestration tools for deploying and managing services Data Processing & Pipelines: Experience with chunking strategies for efficient document processing Building data pipelines to handle large-scale data for AI model training and inference AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch Proficiency in LangChain, LangGraph, and other LLM-related technologies Prompt Engineering: Expertise in advanced prompting techniques like Chain of Thought (CoT) prompting, LLM Judge, and self-reflection prompting Experience with prompt compression and optimization using tools like LLMLingua, AdaFlow, TextGrad, and DSPy Strong understanding of context window management and optimizing prompts for performance and efficiency 3. Qualifications, Experience, Skills Level of educational attainment required (1 or more of the following) Bachelor's or masterʼs degree in Computer Science, Engineering, or a related field. 
Previous Work Experience Required Proven experience of 3+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database. Technical Skills Required Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc. Proficiency in implementing and optimizing machine learning models for natural language processing. Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc. Strong programming skills in languages such as Python and proficiency in relevant frameworks. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). And above all of this, an undying love for beer! We dream big to create a future with more cheer
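The chunking strategies this posting calls for in RAG pipelines can be illustrated with a minimal sliding-window splitter; the window and overlap sizes below are arbitrary, and production chunkers usually split on tokens or sentence boundaries rather than raw characters:

```python
def chunk_text(text, size=20, overlap=5):
    # Split text into overlapping windows so context that straddles a chunk
    # boundary still appears intact in at least one chunk.
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("Snowflake and BigQuery both support clustering.", size=20, overlap=5)
print(chunks)
```

Each chunk would then be embedded and stored in the vector database; the overlap trades a little storage for better recall at chunk boundaries.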
Posted 3 days ago
170.0 years
0 Lacs
Mulshi, Maharashtra, India
On-site
Area(s) of responsibility Empowered By Innovation Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Role: AI Consultant Location: Bangalore, Pune, Noida Experience: 6-9 years We are seeking an AI Consultant to enhance and apply Birlasoft’s AI Parts Classification tool, generating product categories and enriching attributes. The role requires AI/ML expertise, industrial parts knowledge, and stakeholder collaboration. Key Responsibilities Analyze Birlasoft’s AI Parts Classification tool and identify gaps Enhance AI/ML models (e.g., NLP, clustering, GenAI) Fine-tune models for accuracy and scalability Validate AI outputs against business requirements, iterating as needed. Work with stakeholders to validate attributes and refine AI outputs. Ensure AI outputs integrate into the project pipeline, applying business rules to prioritize existing values. Document enhancements, processes, and validation insights for reports. Skills And Qualifications 7+ years in AI-driven data classification, ideally in industrial parts/MRO domains. Proven ability to enhance AI/ML tools for business-specific needs. Expertise in AI/ML (NLP, clustering, supervised/unsupervised learning). Experience with AI classification tools and programming (Python/R). Knowledge of industrial standards (e.g., UNSPSC, eClass). Ability to translate business needs into AI classification tasks. Strong stakeholder management and communication skills. Education Bachelor’s/Master’s in Computer Science, Data Science, AI, or related field. AI/ML certification (e.g., TensorFlow, AWS ML) is a plus. 
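A parts-classification pass of the kind described here can be sketched as a naive keyword matcher; the categories and keywords below are invented stand-ins, and the real tool would use NLP and clustering models rather than exact keyword overlap:

```python
# Hypothetical category vocabulary; a production system would map to a
# standard taxonomy such as UNSPSC or eClass.
CATEGORY_KEYWORDS = {
    "Bearings": {"bearing", "bushing"},
    "Fasteners": {"bolt", "nut", "screw", "washer"},
    "Electrical": {"relay", "fuse", "contactor"},
}

def classify_part(description):
    # Score each category by how many of its keywords appear in the description.
    tokens = set(description.lower().split())
    scores = {cat: len(tokens & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unclassified"

print(classify_part("Hex bolt M8 with washer"))  # -> Fasteners
```

The "Unclassified" fallback mirrors the posting's validation loop: low-confidence outputs are routed to stakeholders for review rather than forced into a category.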
Posted 3 days ago
3.0 - 5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About NCR Atleos NCR Atleos, headquartered in Atlanta, is a leader in expanding financial access. Our dedicated 20,000 employees optimize the branch, improve operational efficiency and maximize self-service availability for financial institutions and retailers across the globe. Job Title: System Administrator (Windows, VMWare) Job Location: Mumbai, India Job Description The System Administrator will be responsible for managing and maintaining the company's IT infrastructure, with a focus on Windows Server environments and VMware virtualization technology. This role requires deep technical knowledge, along with 3-5 years of relevant experience. Key Responsibilities Manage and maintain the VMware virtual environment, including VM provisioning, adding disk space, and making VM changes. Administer Windows Server environments (2012/2016/2019/2022) including system installation, configuration, and troubleshooting. Ensure system availability and performance through monitoring and capacity planning. Manage backups and restore protocols to ensure data integrity and availability. Implement and manage high-availability solutions such as failover clustering, replication, and load balancing. Perform system updates and upgrades, as well as hardware and software migrations. Work on support tickets and provide technical support and guidance to users and other administrators. Document procedures, systems configurations, and network layouts. Support Active Directory, including making DNS changes, creating service accounts, and troubleshooting Active Directory services Qualifications Bachelor’s degree in computer science, Information Technology, or a similar field, or equivalent experience. 3-5 years of proven experience as a System Administrator in a similar role. Strong knowledge of Windows Server operating systems. Proficiency in VMware ESXi and vCenter management. Familiarity with backup and recovery software. Excellent problem-solving and communication skills. 
Certifications like VCP (VMware Certified Professional) or MCSE (Microsoft Certified Solutions Expert) are a plus. Work Environment This is a full-time position. Candidates must work at least 4 days a week from the office. Candidates should be ready to work in shifts. This team supports 24X7 on a rotating schedule. Offers of employment are conditional upon passage of screening criteria applicable to the job. EEO Statement NCR Atleos is an equal-opportunity employer. It is NCR Atleos policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law. Statement to Third Party Agencies To ALL recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
hackajob is collaborating with Wipro to connect them with exceptional tech professionals for this role. Title: PostgreSQL Database Administration Requisition ID: 64234 City: Chennai Country/Region: IN Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com. Job Description Role Purpose The purpose of this role is to provide significant technical expertise in architecture planning and design of the concerned tower (platform, database, middleware, backup, etc.) as well as managing its day-to-day operations. 5+ years of experience in PostgreSQL Database Administration Design, configure, test, and support high-availability clustering utilizing PostgreSQL technologies Responsible for understanding operational requirements including hardware, architecture, configuration, and integration, and for maintaining mission-critical production PostgreSQL databases. Responsible for all logical and physical backup & recovery, including PITR (point-in-time recovery). Deep understanding of streaming replication and Barman backup recovery. Should possess knowledge of authentication and privilege management. Manage clusters and upgrade/migrate across PostgreSQL database versions. Performance tuning, replication, and development of database automation and maintenance scripts. Proficiency in logical and physical database design, monitoring, and troubleshooting. 
Knowledge of DB2 LUW and MongoDB administration is an added advantage Provide technical guidance for integration, testing, design, development, and planning of new production systems/databases. Team Management Resourcing Forecast talent requirements as per the current and future business needs Hire adequate and right resources for the team Train direct reportees to make right recruitment and selection decisions Talent Management Ensure 100% compliance to Wipro’s standards of adequate onboarding and training for team members to enhance capability & effectiveness Build an internal talent pool of HiPos and ensure their career progression within the organization Promote diversity in leadership positions Performance Management Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports. Ensure that organizational programs like Performance Nxt are well understood and that the team is taking the opportunities presented by such programs for themselves and their levels below Employee Satisfaction and Engagement Lead and drive engagement initiatives for the team Track team satisfaction scores and identify initiatives to build engagement within the team Proactively challenge the team with larger and enriching projects/ initiatives for the organization or team Exercise employee recognition and appreciation
Deliver:
No. | Performance Parameter | Measure
1 | Operations of the tower | SLA adherence; knowledge management; CSAT/customer experience; identification of risk issues and mitigation plans
2 | New projects | Timely delivery; no unauthorised changes; no formal escalations
Mandatory Skills: PostgreSQL Database Administration. Experience: 5-8 years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. 
We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
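The streaming-replication expertise this posting calls for often involves comparing write-ahead-log positions. PostgreSQL reports LSNs as a "high/low" pair of hex words, so the byte lag between primary and standby can be computed like this (the sample LSN values are made up):

```python
def lsn_to_bytes(lsn):
    # A PostgreSQL LSN such as '16/B374D848' is two 32-bit hex words:
    # absolute position = high * 2**32 + low.
    high, low = lsn.split("/")
    return (int(high, 16) << 32) + int(low, 16)

def replication_lag_bytes(primary_lsn, standby_lsn):
    # Bytes of WAL the standby still has to replay to catch up.
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_lsn)

print(replication_lag_bytes("16/B374D848", "16/B374D000"))  # -> 2120
```

In live monitoring the two LSNs would come from the primary's `pg_current_wal_lsn()` and the standby's replay position; this sketch only shows the arithmetic behind the lag number.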
Posted 3 days ago
7.0 years
4 - 6 Lacs
Hyderābād
On-site
We are the leading provider of professional services to the middle market globally, our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM. Supervisor, Data Analytics & BI Engineer This role requires 7-10 years of BI development experience. We are seeking a results-driven Business Intelligence (BI) Developer with strong expertise in Tableau, Alteryx, AI-enhanced analytics, and Robotic Process Automation (RPA) tools (e.g., UiPath, Power Automate). The ideal candidate will design and implement end-to-end data pipelines, build insightful dashboards, and automate manual processes using RPA and intelligent workflows, and will bring a strong understanding of Agile methodologies. The ideal candidate will play a pivotal role in designing and implementing advanced visualizations and reporting dashboards; familiarity with Agile tools like Jira, Confluence, and Gliffy will be advantageous. Essential Duties Design, develop, and maintain interactive dashboards and reports using Tableau. Build and optimize data workflows using Alteryx Designer and Server. Integrate AI and ML features into BI processes for advanced analytics (e.g., sentiment analysis, forecasting). Work closely with business stakeholders to translate requirements into actionable insights. Ensure data quality, accuracy, and consistency across BI solutions. Work in an Agile environment, participating in sprint planning, stand-ups, and other Agile ceremonies to align development activities with release cycles. Optimize performance and user experience for BI applications and dashboards. 
Utilizing tools like Jira, Confluence, and Gliffy for efficient management and communication. EDUCATION/CERTIFICATIONS Bachelor's degree in Computer Science, Engineering, or a related field. EXPERIENCE 8-10+ years of extensive experience as a BI Developer with strong expertise in Tableau, Alteryx, AI-enhanced analytics, and Robotic Process Automation (RPA) tools (e.g., UiPath, Power Automate). TECHNICAL/SOFT SKILLS Tableau Certification (Desktop Specialist or above). Alteryx Core or Advanced Certification. Experience with cloud platforms (Azure, AWS, or GCP). Knowledge of Tableau AI capabilities (Pulse, Einstein Discovery, GPT-augmented insights). Familiarity with Python, R, or Power BI (a plus). Familiarity with Git and CI/CD workflows (e.g., GitHub Actions, Azure DevOps). Exposure to Agile/Scrum methodologies. Alteryx Designer (data wrangling, workflows, macros), Alteryx Server/Gallery (job scheduling, sharing workflows) Integration of AI features in Tableau (e.g., forecasting, clustering, natural language queries) Use of Alteryx predictive tools or Python/R scripts Experience with tools like: UiPath, Microsoft Power Automate, Automation Anywhere (optional) LEADERSHIP SKILLS Must: BI experience with expertise in tools like Tableau, Power BI, or QlikSense, and wrangling tools such as Alteryx or Tableau Prep. Preferred: Exposure to Agile/Scrum methodologies. At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. 
Learn more about our total rewards at https://rsmus.com/careers/india.html . RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV Status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans, and Indian Armed Forces Personnel status ; pre-disposing genetic characteristics or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com .
Posted 3 days ago
6.0 years
0 Lacs
India
On-site
JOB DESCRIPTION Key Responsibilities: Prior experience migrating from IBM DataStage to DBT and BigQuery, or similar data migration into cloud solutions. Design and implement modular, testable, and scalable DBT models aligned with business logic and performance needs. Optimize and manage BigQuery datasets, partitioning, clustering, and cost-efficient querying. Collaborate with stakeholders to understand existing pipelines and translate them into modern ELT workflows. Establish best practices for version control, CI/CD, testing, and documentation in DBT. Provide technical leadership and mentorship to team members during the migration process. Ensure high standards of data quality, governance, and security. Required Qualifications: 6+ years of experience in data engineering, with at least 3+ years hands-on with DBT and BigQuery. Strong understanding of SQL, data warehousing, and ELT architecture. Experience with data modeling (especially dimensional modeling) and performance tuning in BigQuery. Familiarity with legacy ETL tools like IBM DataStage and ability to reverse-engineer existing pipelines. Proficiency in Git, CI/CD pipelines, and DataOps practices. Excellent communication skills and ability to work independently and collaboratively. Preferred Qualifications: Experience in cloud migration projects (especially GCP). Knowledge of data governance, access control, and cost optimization in BigQuery. Exposure to orchestration tools like Airflow. Familiarity with Agile methodologies and cross-functional team collaboration.
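The partitioning and clustering work this posting describes ultimately boils down to DDL like the following. A small helper rendered from Python as a sketch; the table and column names are invented, while `PARTITION BY` and `CLUSTER BY` are standard BigQuery DDL clauses:

```python
def bigquery_ddl(table, partition_col, cluster_cols):
    # BigQuery prunes partitions on the partition column and co-locates rows
    # by cluster columns, which is the main lever for cost-efficient querying.
    return (
        f"CREATE TABLE {table} "
        f"PARTITION BY DATE({partition_col}) "
        f"CLUSTER BY {', '.join(cluster_cols)} "
        f"AS SELECT * FROM staging.{table.split('.')[-1]};"
    )

print(bigquery_ddl("analytics.events", "event_ts", ["customer_id", "event_type"]))
```

Queries that filter on `DATE(event_ts)` then scan only the matching partitions, and filters on the leading cluster columns further reduce bytes billed; in a DBT migration the same settings would live in the model's `config()` block.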
Posted 3 days ago
0 years
8 - 9 Lacs
Hyderābād
On-site
India Research Investment Bank
Job Reference # 318877BR
City: Hyderabad
Job Type: Full Time

Your role
Do you have sharp analytic skills? Do you know how to solve problems and develop innovative solutions? Do you enjoy responsibility and independent work? We’re looking for a Junior Data Analyst with practical knowledge of Python who is:
- able to apply data science and statistical techniques to real problems,
- well organized and dependable,
- able to turn large datasets into an asset across the organization,
- able to collect and combine data from multiple sources, analyze it for insights, and produce great visuals.

Your team
UBS Evidence Lab is the most experienced global team of alternative data experts. We’re a collection of data and software engineers, quantitative market researchers, social media experts, data pricing whizzes, and more. This diversity allows us to look at problems from differing perspectives and turn data into evidence. You’ll be working closely with various Data Analyst teams, which are part of a street-leading primary research platform called Evidence Lab. Your role will focus on systematically analyzing and building statistical models on the companies and markets that Evidence Lab researches. Your main goal is to provide insights for better business decisions. Our offices are located globally, and you will be working closely with analysts based out of Poland, the US, the UK, and APAC. You would be required to work from UBS BSC Hyderabad (India).

Your expertise
- data processing and programming skills in Python, including knowledge of libraries used for data analysis / data science,
- experience in data analysis and data science techniques,
- knowledge of statistical and econometric techniques: time series (stationarity), clustering, regression, variable selection methods, out-of-sample testing,
- good practical understanding of SQL,
- understanding of NLP techniques would be a benefit,
- the ability to deliver under time pressure and work independently,
- excellent written and oral English,
- a high degree of proactivity and creativity,
- willingness to collaborate and work in a team,
- ability to take ownership of projects and processes for continuous improvement,
- excellent attention to detail,
- a flexible approach to working hours and timings.

About us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion, and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
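The regression skill listed under "Your expertise" can be illustrated with a tiny self-contained sketch: closed-form ordinary least squares with a single predictor. The example data is made up so that the fit is exact:

```python
def ols_fit(xs, ys):
    """Closed-form simple linear regression (ordinary least squares)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

# y = 2x + 1 exactly, so the fit should recover slope 2 and intercept 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
print(ols_fit(xs, ys))  # (2.0, 1.0)
```

In practice a library such as statsmodels or scikit-learn would be used, but the closed form above is what those fits compute for one predictor.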
Posted 3 days ago
0 years
0 - 0 Lacs
Thiruvananthapuram
On-site
Data Science and AI Developer

Job Description:
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

Key Responsibilities:
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

Requirements:
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Tooling:
- Data Manipulation and Analysis: NumPy, Pandas
- Data Visualization: Matplotlib, Seaborn, Power BI
- Machine Learning Libraries: scikit-learn, TensorFlow, Keras
- Statistical Analysis: SciPy
- Web Scraping: Scrapy
- IDE: PyCharm, Google Colab

Web Development:
- HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
- Python Django Expertise: In-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
- Theming: Proven experience in designing and implementing custom themes for Python websites.
- Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
- Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
- Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
- Interns must know how to connect the front end with data science components, and vice versa.

Benefits:
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
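The "scalable data pipelines" responsibility above often comes down to composing small, independently testable steps. A minimal, framework-free sketch (the step names and sample data are illustrative, not part of the posting):

```python
def make_pipeline(*steps):
    """Compose preprocessing steps into one callable, so each step
    stays small and can be unit-tested on its own."""
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

def drop_missing(rows):
    """Ingestion cleanup: discard records with no value."""
    return [r for r in rows if r is not None]

def min_max_scale(rows):
    """Feature engineering: rescale values into [0, 1]."""
    lo, hi = min(rows), max(rows)
    return [(r - lo) / (hi - lo) for r in rows]

pipeline = make_pipeline(drop_missing, min_max_scale)
print(pipeline([10, None, 20, 30]))  # [0.0, 0.5, 1.0]
```

Production systems express the same composition through tools like scikit-learn's `Pipeline` or an orchestrator's DAG; the structure is the same.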
Posted 3 days ago
3.0 years
0 Lacs
Chennai
On-site
Ford/GDIA Mission and Scope:
At Ford Motor Company, we believe freedom of movement drives human progress. We also believe in providing you with the freedom to define and realize your dreams. With our incredible plans for the future of mobility, we have a wide variety of opportunities for you to accelerate your career potential as you help us define tomorrow’s transportation.

Creating the future of smart mobility requires the highly intelligent use of data, metrics, and analytics. That’s where you can make an impact as part of our Global Data Insight & Analytics team. We are the trusted advisers that enable Ford to clearly see business conditions, customer needs, and the competitive landscape. With our support, key decision-makers can act in meaningful, positive ways. Join us and use your data expertise and analytical skills to drive evidence-based, timely decision-making.

The Global Data Insight & Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics.

About the Role:
You will be part of the FCSD analytics team, playing a critical role in leveraging data science to drive significant business impact within Ford Customer Service Division. As a Data Scientist, you will translate complex business challenges into data-driven solutions. This involves partnering closely with stakeholders to understand problems, working with diverse data sources (including within GCP), developing and deploying scalable AI/ML models, and communicating actionable insights that deliver measurable results for Ford.

Qualifications:
- At least 3 years of relevant professional experience applying data science techniques to solve business problems, including demonstrated hands-on proficiency with SQL and Python.
- Bachelor's or Master's degree in a quantitative field (e.g., Statistics, Computer Science, Mathematics, Engineering, Economics).
- Hands-on experience conducting statistical data analysis (EDA, forecasting, clustering, hypothesis testing, etc.) and applying machine learning techniques (classification/regression, NLP, time-series analysis, etc.).

Technical Skills:
- Proficiency in SQL, including the ability to write and optimize queries for data extraction and analysis.
- Proficiency in Python for data manipulation (Pandas, NumPy), statistical analysis, and implementing machine learning models (scikit-learn, TensorFlow, PyTorch, etc.).
- Working knowledge of a cloud environment (GCP, AWS, or Azure) is preferred for developing and deploying models.
- Experience with version control systems, particularly Git.
- Nice to have: exposure to Generative AI / Large Language Models (LLMs).

Functional Skills:
- Proven ability to understand and formulate business problem statements, and to translate them into data science problems.
- Strong problem-solving ability, with the capacity to analyze complex issues and develop effective solutions.
- Excellent verbal and written communication skills, with a demonstrated ability to translate complex technical information and results into simple, understandable language for non-technical audiences.
- Strong business engagement skills, including the ability to build relationships, collaborate effectively with stakeholders, and contribute to data-driven decision-making.

Responsibilities:
- Build an in-depth understanding of the business domain and data sources, demonstrating strong business acumen.
- Extract, analyze, and transform data using SQL for insights.
- Apply statistical methods and develop ML models to solve business problems.
- Design and implement analytical solutions, contributing to their deployment, ideally leveraging cloud environments.
- Work closely and collaboratively with Product Owners, Product Managers, Software Engineers, and Data Engineers within an agile development environment.
- Integrate and operationalize ML models for real-world impact.
- Monitor the performance and impact of deployed models, iterating as needed.
- Present findings and recommendations effectively to both technical and non-technical audiences to inform and drive business decisions.
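One of the techniques named in the qualifications, clustering, can be sketched in a few lines of dependency-free Python: Lloyd's k-means algorithm on 1-D data. The data and initial centers are invented for illustration; real work would use scikit-learn on multi-dimensional features:

```python
def kmeans_1d(values, centers, iters=20):
    """Lloyd's algorithm on 1-D data: assign each point to its nearest
    center, then move each center to the mean of its assigned points."""
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # empty clusters keep their previous center
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

readings = [1, 2, 3, 10, 11, 12]        # two obvious groups
print(kmeans_1d(readings, [0.0, 5.0]))  # [2.0, 11.0]
```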
Posted 3 days ago
0 years
0 Lacs
Chennai
On-site
Expertise:
- Handling large-scale structured and unstructured data; has efficiently handled large-scale generative AI datasets and outputs.
- Familiarity with Docker tools and pipenv/conda/poetry environments.
- Comfort following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
- High familiarity with the use of DL theory/practices in NLP applications.
- Comfortable coding in Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and Pandas.
- Comfortable using two or more open-source NLP modules like spaCy, TorchText, fastai.text, farm-haystack, and others.
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
- Has implemented real-world BERT or other transformer fine-tuned models (sequence classification, NER or QA) from data preparation, model creation and inference through deployment.
- Use of GCP services like BigQuery, Cloud Functions, Cloud Run, Cloud Build, Vertex AI.
- Good working knowledge of other open-source packages to benchmark and derive summaries.
- Experience using GPU/CPU on cloud and on-prem infrastructures.

Education: Bachelor’s in Engineering or Master’s degree in Computer Science, Engineering, Maths or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.

Responsibilities:
- Design NLP/LLM/GenAI applications/products following robust coding practices.
- Explore SoTA models/techniques so that they can be applied to automotive industry use cases.
- Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions.
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools.
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.).
- Converge multiple bots into super apps using LLMs with multimodalities.
- Develop agentic workflows using AutoGen, Agentbuilder, LangGraph.
- Build modular AI/ML products that can be consumed at scale.
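The "fundamental text data processing" expertise above (regex, token/word analysis, noise reduction) can be sketched with nothing but the standard library. The specific cleaning rules are illustrative, not a prescribed pipeline:

```python
import re

def clean_and_tokenize(text):
    """Basic noise reduction: lowercase, strip URLs and punctuation,
    then split on whitespace into word tokens."""
    text = text.lower()
    text = re.sub(r"http\S+|www\.\S+", " ", text)  # remove URLs
    text = re.sub(r"[^a-z0-9\s]", " ", text)       # remove punctuation/symbols
    return text.split()

print(clean_and_tokenize("GREAT product!!! see https://example.com/review :)"))
# ['great', 'product', 'see']
```

For transformer fine-tuning, this hand-rolled step is replaced by the model's own subword tokenizer, but noise reduction of this kind is still common before segmentation and clustering.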
Posted 3 days ago
3.0 years
3 - 7 Lacs
Chennai
Remote
Mandatory Criteria:
- Valid passport – candidate must hold a valid passport.
- Willingness to relocate and work onsite in a foreign country as required by the project.

Location: Riyadh, Saudi Arabia

We are seeking a highly skilled and motivated AI/ML Developer to join our dynamic team. The ideal candidate will have a strong background in machine learning, natural language processing (NLP), and deep learning, with a proven ability to develop and deploy AI/ML solutions. This role requires a deep understanding of AI/ML concepts, excellent programming skills, and the ability to work collaboratively in a fast-paced environment.

Responsibilities
- Design, develop, and implement AI/ML models and solutions.
- Collaborate with cross-functional teams to identify and solve complex business problems using AI/ML techniques.
- Develop and maintain machine learning pipelines, including data preprocessing, feature engineering, model training, evaluation, and deployment.
- Conduct experiments, analyze results, and iterate on models to improve performance.
- Stay up-to-date with the latest advancements in AI/ML, including new algorithms, techniques, and tools.
- Write clean, well-documented, and testable code.
- Deploy and monitor AI/ML models in production environments.
- Contribute to the development of AI/ML best practices and standards.

Skills
- Minimum 3 years of experience in AI/ML development.
- Programming languages: Python, Go (Golang).
- Machine learning frameworks: PyTorch; TensorFlow and Keras optional.
- Solid understanding of NLP concepts and techniques.
- Experience with data manipulation and analysis tools (e.g., Pandas, NumPy).
- Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus.
- Experience with machine learning algorithms (e.g., regression, classification, clustering).
- Version control: Git.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.

Job Type: Full-time
Pay: ₹300,000.00 - ₹700,000.00 per year
Benefits: Paid sick time, Paid time off, Work from home
Schedule: Day shift
Application Question(s):
- Are you willing to relocate to Riyadh, Saudi Arabia?
- Do you have a valid passport?
- Do you have expertise in Python?
- How familiar are you with AI and ML, on a scale of 1 to 10?
Work Location: In person
Posted 3 days ago
3.0 years
6 - 19 Lacs
India
On-site
As a Machine Learning Engineer, you’ll be applying your expertise to help us develop a world-leading capability in this exciting and challenging domain. You will be responsible for contributing to the design, development, deployment, testing, maintenance and enhancement of ML software solutions.

Primary responsibilities:
1. Applying machine learning, deep learning, and signal processing to large datasets (audio, sensors, images, videos, text) to develop models.
2. Architecting large-scale data analytics / modeling systems.
3. Designing and programming machine learning methods and integrating them into our ML framework / pipeline.
4. Working closely with data scientists/analysts to collaborate and support the development of ML data pipelines, platforms and infrastructure.
5. Evaluating and validating analyses with statistical methods, and presenting them in a lucid form to people not familiar with the domain of data science / computer science.
6. Creating microservices and APIs for serving ML models and ML services.
7. Evaluating new machine learning methods and adopting them for our purposes.
8. Feature engineering to add new features that improve model performance.

Required skills:
1. Background and knowledge of recent advances in machine learning, deep learning, natural language processing, and/or image/signal/video processing, with 3+ years of professional experience working on real-world applications.
2. Strong programming background, e.g. Python, PyTorch, MATLAB, C/C++, Java, and knowledge of software engineering concepts (OOP, design patterns).
3. Knowledge of machine learning libraries: TensorFlow, Keras, scikit-learn, PyTorch.
4. Excellent mathematical skills and background, e.g. accuracy, significance tests, visualization, advanced probability concepts.
5. Architecting and implementing end-to-end solutions for accelerating experimentation and model building.
6. Working knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.).
7. Ability to perform both independent and collaborative research.
8. Excellent written and spoken communication skills.

Preferred qualification and experience:
B.E./B.Tech/B.S. candidates with 3+ years of experience in the aforementioned fields will be considered. M.E./M.S./M.Tech/PhD, preferably in fields related to Computer Science, with experience in machine learning, image and signal processing, or statistics, is preferred.

Job Types: Full-time, Permanent
Pay: ₹668,717.16 - ₹1,944,863.46 per year
Benefits: Flexible schedule, Paid sick time, Paid time off
Schedule: Day shift
Supplemental Pay: Yearly bonus
Ability to commute/relocate: Naranpura, Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Experience: AI/ML: 3 years (Required)
Work Location: In person
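The "microservices and APIs for serving ML models" responsibility above boils down to a JSON-in/JSON-out handler wrapped around a model callable. A framework-free sketch (the threshold "model" is a stand-in, not a real trained model; a Flask or FastAPI route would delegate to a handler shaped like this):

```python
import json

def make_handler(model):
    """Wrap a model callable in a JSON-in/JSON-out request handler,
    with basic input validation."""
    def handle(request_body):
        try:
            features = json.loads(request_body)["features"]
        except (KeyError, ValueError):  # malformed JSON or missing key
            return json.dumps({"error": 'expected {"features": [...]}'})
        return json.dumps({"prediction": model(features)})
    return handle

# stand-in "model": classify as 1 when the feature sum exceeds a threshold
handle = make_handler(lambda feats: int(sum(feats) > 1.0))
print(handle('{"features": [0.4, 0.9]}'))  # {"prediction": 1}
print(handle('not json'))                  # error payload
```

Keeping the handler a pure function makes the serving layer testable without spinning up an HTTP server.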
Posted 3 days ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It’s who we are as a company. It’s our culture. It’s our heritage. And more than ever, it’s our future. A future where we’re always looking forward. Always serving up new ways to meet life’s moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources and opportunities to unleash their full potential. The power we create together – when we combine your strengths with ours – is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Junior Data Scientist
Location: Bangalore
Reporting to: Senior Manager – Analytics

Purpose of the role
The Global GenAI Team at Anheuser-Busch InBev (AB InBev) is tasked with constructing competitive solutions utilizing GenAI techniques. These solutions aim to extract contextual insights and meaningful information from our enterprise data assets. The derived data-driven insights play a pivotal role in empowering our business users to make well-informed decisions regarding their respective products.

In the role of a Machine Learning Engineer (MLE), you will operate at the intersection of:
- LLM-based frameworks, tools, and technologies
- Cloud-native technologies and solutions
- Microservices-based software architecture and design patterns

As an additional responsibility, you will be involved in the complete development cycle of new product features, encompassing tasks such as the development and deployment of new models integrated into production systems. Furthermore, you will have the opportunity to critically assess and influence the product engineering, design, architecture, and technology stack across multiple products, extending beyond your immediate focus.

Key tasks & accountabilities
- Large Language Models (LLM): Experience with LangChain, LangGraph. Proficiency in building agentic patterns like ReAct, ReWOO, LLMCompiler.
- Multi-modal Retrieval-Augmented Generation (RAG): Expertise in multi-modal AI systems (text, images, audio, video). Designing and optimizing chunking strategies and clustering for large data processing.
- Streaming & Real-time Processing: Experience in audio/video streaming and real-time data pipelines. Low-latency inference and deployment architectures.
- NL2SQL: Natural-language-driven SQL generation for databases. Experience with natural language interfaces to databases and query optimization.
- API Development: Building scalable APIs with FastAPI for AI model serving.
- Containerization & Orchestration: Proficient with Docker for containerized AI services. Experience with orchestration tools for deploying and managing services.
- Data Processing & Pipelines: Experience with chunking strategies for efficient document processing. Building data pipelines to handle large-scale data for AI model training and inference.
- AI Frameworks & Tools: Experience with AI/ML frameworks like TensorFlow, PyTorch. Proficiency in LangChain, LangGraph, and other LLM-related technologies.
- Prompt Engineering: Expertise in advanced prompting techniques like Chain-of-Thought (CoT) prompting, LLM-as-judge, and self-reflection prompting. Experience with prompt compression and optimization using tools like LLMLingua, AdalFlow, TextGrad, and DSPy. Strong understanding of context window management and optimizing prompts for performance and efficiency.

Qualifications, Experience, Skills
Level of educational attainment required: Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.

Previous work experience required: Proven experience of 1+ years in developing and deploying applications utilizing Azure OpenAI and Redis as a vector database.

Technical skills required:
- Solid understanding of language model technologies, including LangChain, OpenAI Python SDK, LlamaIndex, Ollama, etc.
- Proficiency in implementing and optimizing machine learning models for natural language processing.
- Experience with observability tools such as MLflow, LangSmith, Langfuse, Weights & Biases, etc.
- Strong programming skills in languages such as Python and proficiency in relevant frameworks.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

And above all of this, an undying love for beer! We dream big to create a future with more cheer.
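"Designing and optimizing chunking strategies" for RAG, mentioned under the key tasks, has a common baseline: fixed-size chunks with overlap, so that sentences straddling a boundary remain retrievable from at least one chunk. A minimal sketch (the sizes are illustrative; production systems often chunk by tokens or sentences instead of characters):

```python
def chunk_text(text, size=500, overlap=100):
    """Fixed-size chunking with overlap for RAG indexing: neighboring
    chunks share `overlap` characters."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

print(chunk_text("abcdefghij", size=4, overlap=2))
# ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Each chunk would then be embedded and stored in the vector database (Redis, in this posting's stack) keyed by its source document.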
Posted 3 days ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JD for Data Scientist:
We are seeking an experienced Data Scientist to join our growing analytics and AI team. This role will involve working closely with cross-functional teams to deliver actionable insights, build predictive models, and drive data-driven decision-making across the organization. The ideal candidate combines strong analytical skills with hands-on experience in statistical modeling, machine learning, and data engineering best practices.

Key Responsibilities:
- Understand business problems and translate them into data science solutions.
- Build, validate, and deploy machine learning models for prediction, classification, clustering, etc.
- Perform deep-dive exploratory data analysis and uncover hidden insights.
- Work with large, complex datasets from multiple sources; perform data cleaning and preprocessing.
- Design and run A/B tests and experiments to validate hypotheses.
- Collaborate with data engineers, business analysts, and product managers to drive initiatives from ideation to production.
- Present results and insights to non-technical stakeholders in a clear, concise manner.
- Contribute to the development of reusable code libraries, templates, and documentation.

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
- 3–7 years of hands-on experience in data science, machine learning, or applied statistics.
- Proficiency in Python or R, and hands-on experience with libraries such as scikit-learn, pandas, NumPy, XGBoost, TensorFlow/PyTorch.
- Solid understanding of machine learning algorithms, statistical inference, and data mining techniques.
- Strong SQL skills; experience working with large-scale databases (e.g., Snowflake, BigQuery, Redshift).
- Experience with data visualization tools like Power BI, Tableau, or Plotly.
- Working knowledge of cloud platforms like AWS, Azure, or GCP is preferred.
- Familiarity with MLOps tools and model deployment best practices is a plus.

Preferred Qualifications:
- Exposure to time series analysis, NLP, or deep learning techniques.
- Experience working in domains like healthcare, fintech, retail, or supply chain.
- Understanding of version control (Git) and Agile development methodologies.

Why Join Us:
- Opportunity to work on impactful, real-world problems.
- Be part of a high-performing and collaborative team.
- Exposure to cutting-edge technologies in data and AI.
- Career growth and continuous learning environment.
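The "design and run A/B tests" responsibility typically means a two-proportion z-test on conversion counts. A dependency-free sketch using the pooled rate and the normal CDF (the counts below are invented for illustration):

```python
import math

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment:
    pooled conversion rate, standard error, then a normal-CDF p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 10% vs 15% conversion over 1,000 users each: clearly significant
print(ab_test_pvalue(100, 1000, 150, 1000) < 0.05)  # True
```

In practice, libraries such as statsmodels (`proportions_ztest`) compute the same statistic, plus power and sample-size planning.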
Posted 3 days ago