4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Senior Data Scientist to join our team and drive innovation by leveraging your expertise in statistical data analysis, machine learning, and NLP to create and deliver impactful AI solutions. As a Senior Data Scientist, you will work on challenging projects that require end-to-end involvement, from data preparation to model deployment, all while collaborating with cross-functional teams and delivering production-ready solutions.

Responsibilities
- Develop, implement, and evaluate AI solutions, including classification, clustering, anomaly detection, and NLP
- Apply advanced statistical techniques and machine learning algorithms to solve complex business problems
- Use Python and SQL to write production-level code and perform comprehensive data analysis
- Implement model development workflows, including ML Ops and feature engineering techniques
- Use Azure AI Search and other tools to make data and models accessible to stakeholders
- Collaborate with software development and project management teams, leveraging version control tools like GitLab and project tracking software like Jira
- Optimize data pipelines and model performance for real-world applications
- Communicate technical concepts effectively to both technical and non-technical audiences
- Stay up to date on emerging technologies, applying a problem-solving mindset to integrate them into projects
- Follow Agile development practices and work fluently on the UNIX command line

Requirements
- 4+ years of experience in Data Science
- Proficiency in statistical data analysis, machine learning, and NLP, with an understanding of their practical applications and limitations
- Expertise in Python programming and SQL, with experience in data analysis libraries and production-level code
- Background in developing AI solutions, including classification, clustering, anomaly detection, or NLP
- Familiarity with ML Ops and feature engineering techniques, with hands-on experience in model workflows
- Flexibility to use tools like Azure AI Search to make models accessible for business use
- Competency in software development methodologies and code versioning tools such as GitLab
- Knowledge of project management tools such as Jira and Agile development practices
- Experience working with the UNIX command line and problem-solving with innovative technologies
- B2 level of English or higher, with an emphasis on technical communication skills

Nice to have
- Familiarity with Cloud Computing, Big Data tools, and/or containerization technologies
- Proficiency in data visualization tools for clear communication of insights
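For context on the classification work this posting lists, here is a minimal sketch of a production-style scikit-learn pipeline covering feature preparation, training, and evaluation. The dataset, column names, and hyperparameters are illustrative assumptions, not details from the posting.

```python
# A minimal, illustrative classification pipeline; "customers.csv" and its
# columns are placeholder assumptions for demonstration only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customers.csv")  # hypothetical dataset
X, y = df.drop(columns=["churned"]), df["churned"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["tenure_months", "monthly_spend"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan", "region"]),
])
model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```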
Posted 1 week ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a Lead Data Scientist to join our collaborative team. You will play a key role in developing and implementing AI solutions across various applications, from statistical analysis to natural language processing. If you are passionate about leveraging data to create impactful solutions, we encourage you to apply.

Responsibilities
- Develop and implement AI solutions including classification, clustering, and anomaly detection
- Conduct statistical data analysis and apply machine learning techniques
- Manage complete project delivery from data preparation to model evaluation
- Utilize Python programming and SQL for data manipulation and analysis
- Engage in ML Ops and model development workflows
- Create models that are accessible for business use
- Collaborate with teams using software development methodologies and version control
- Document processes and maintain project tracking tools such as Jira
- Stay updated with new technologies and apply problem-solving skills effectively
- Deliver production-ready solutions and facilitate knowledge sharing

Requirements
- 9+ years of experience in software engineering, specializing in Data Science
- At least 1 year of relevant leadership experience
- Proficiency in statistical data analysis, machine learning, and NLP, with a clear understanding of practical applications and limitations
- Experience in developing and implementing AI solutions, including classification, clustering, anomaly detection, and NLP
- Expertise in complete project delivery, from data preparation to model building, evaluation, and visualization
- Proficiency in Python programming and SQL, with experience in production-level code and data analysis libraries
- Familiarity with ML Ops, model development workflows, and feature engineering techniques
- Capability in manipulating data and developing models accessible for business use, with experience in Azure AI Search
- Competence in software development methodologies, code versioning (e.g., GitLab), and project tracking tools (e.g., Jira)
- Enthusiasm for learning new technologies, with expertise in problem-solving and delivering production-ready solutions
- Fluency in the UNIX command line
- Familiarity with Agile development practices
- Excellent communication skills in English, with a minimum proficiency level of B2+

Nice to have
- Knowledge of Cloud Computing
- Experience with Big Data tools
- Familiarity with visualization tools
- Proficiency in containerization tools
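To illustrate the anomaly-detection responsibility named above, here is a minimal scikit-learn sketch using IsolationForest. The data is synthetic and the contamination rate is an assumption for demonstration only.

```python
# Illustrative anomaly detection on synthetic 2-D data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # typical observations
outliers = rng.uniform(low=-6, high=6, size=(15, 2))    # injected anomalies
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = detector.predict(X)  # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(X)} points as anomalous")
```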
Posted 1 week ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Senior Data Scientist to join our team and drive innovation by leveraging your expertise in statistical data analysis, machine learning, and NLP to create and deliver impactful AI solutions. As a Senior Data Scientist, you will work on challenging projects that require end-to-end involvement, from data preparation to model deployment, all while collaborating with cross-functional teams and delivering production-ready solutions.

Responsibilities
- Develop, implement, and evaluate AI solutions, including classification, clustering, anomaly detection, and NLP
- Apply advanced statistical techniques and machine learning algorithms to solve complex business problems
- Use Python and SQL to write production-level code and perform comprehensive data analysis
- Implement model development workflows, including ML Ops and feature engineering techniques
- Use Azure AI Search and other tools to make data and models accessible to stakeholders
- Collaborate with software development and project management teams, leveraging version control tools like GitLab and project tracking software like Jira
- Optimize data pipelines and model performance for real-world applications
- Communicate technical concepts effectively to both technical and non-technical audiences
- Stay up to date on emerging technologies, applying a problem-solving mindset to integrate them into projects
- Follow Agile development practices and work fluently on the UNIX command line

Requirements
- 4+ years of experience in Data Science
- Proficiency in statistical data analysis, machine learning, and NLP, with an understanding of their practical applications and limitations
- Expertise in Python programming and SQL, with experience in data analysis libraries and production-level code
- Background in developing AI solutions, including classification, clustering, anomaly detection, or NLP
- Familiarity with ML Ops and feature engineering techniques, with hands-on experience in model workflows
- Flexibility to use tools like Azure AI Search to make models accessible for business use
- Competency in software development methodologies and code versioning tools such as GitLab
- Knowledge of project management tools such as Jira and Agile development practices
- Experience working with the UNIX command line and problem-solving with innovative technologies
- B2 level of English or higher, with an emphasis on technical communication skills

Nice to have
- Familiarity with Cloud Computing, Big Data tools, and/or containerization technologies
- Proficiency in data visualization tools for clear communication of insights
Posted 1 week ago
9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a Lead Data Scientist to join our collaborative team. You will play a key role in developing and implementing AI solutions across various applications, from statistical analysis to natural language processing. If you are passionate about leveraging data to create impactful solutions, we encourage you to apply.

Responsibilities
- Develop and implement AI solutions including classification, clustering, and anomaly detection
- Conduct statistical data analysis and apply machine learning techniques
- Manage complete project delivery from data preparation to model evaluation
- Utilize Python programming and SQL for data manipulation and analysis
- Engage in ML Ops and model development workflows
- Create models that are accessible for business use
- Collaborate with teams using software development methodologies and version control
- Document processes and maintain project tracking tools such as Jira
- Stay updated with new technologies and apply problem-solving skills effectively
- Deliver production-ready solutions and facilitate knowledge sharing

Requirements
- 9+ years of experience in software engineering, specializing in Data Science
- At least 1 year of relevant leadership experience
- Proficiency in statistical data analysis, machine learning, and NLP, with a clear understanding of practical applications and limitations
- Experience in developing and implementing AI solutions, including classification, clustering, anomaly detection, and NLP
- Expertise in complete project delivery, from data preparation to model building, evaluation, and visualization
- Proficiency in Python programming and SQL, with experience in production-level code and data analysis libraries
- Familiarity with ML Ops, model development workflows, and feature engineering techniques
- Capability in manipulating data and developing models accessible for business use, with experience in Azure AI Search
- Competence in software development methodologies, code versioning (e.g., GitLab), and project tracking tools (e.g., Jira)
- Enthusiasm for learning new technologies, with expertise in problem-solving and delivering production-ready solutions
- Fluency in the UNIX command line
- Familiarity with Agile development practices
- Excellent communication skills in English, with a minimum proficiency level of B2+

Nice to have
- Knowledge of Cloud Computing
- Experience with Big Data tools
- Familiarity with visualization tools
- Proficiency in containerization tools
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India’s first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society.

We are currently looking for a Senior Data Scientist + Instructor to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.

Key Responsibilities
- Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques.
- Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering.
- Teach the theory, implementation, and evaluation of a wide range of algorithms for classification, association rule mining, clustering, and anomaly detection.
- Design and facilitate practical lab sessions and assignments that give students hands-on experience with modern data tools and software.
- Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs).
- Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle).
- Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to keep the curriculum relevant and cutting-edge.
- Contribute to the academic and research environment of the department and the university.

Required Qualifications
- A Ph.D. (or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
- Demonstrable 3-10 years of expertise in the core concepts of data engineering and machine learning as outlined in the syllabus.
- Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn).
- Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging.
- Excellent communication and interpersonal skills.

Preferred Qualifications
- A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals.
- Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role.
- Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch).
- Experience in mentoring student teams for data science competitions or hackathons.

Perks & Benefits
- Competitive salary packages aligned with industry standards.
- Access to state-of-the-art labs and classroom facilities.

To know more about us, feel free to explore our website: Newton School of Technology. We look forward to the possibility of having you join our academic team and help shape the future of tech education!
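As a flavor of the clustering unit this role would teach, here is a minimal lab-style sketch: k-means on a synthetic dataset, with silhouette score used to compare candidate values of k. The data and the candidate k values are assumptions for illustration.

```python
# Illustrative clustering lab: pick k by silhouette score on synthetic blobs.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.8, random_state=7)

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=7).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
# The k with the highest silhouette score is the best of these candidates.
```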
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly experienced Voice AI/ML Engineer to lead the design and deployment of real-time voice intelligence systems. This role focuses on ASR, TTS, speaker diarization, wake word detection, and building production-grade modular audio processing pipelines to power next-generation contact center solutions, intelligent voice agents, and telecom-grade audio systems. You will work at the intersection of deep learning, streaming infrastructure, and speech/NLP technology, creating scalable, low-latency systems across diverse audio formats and real-world applications.

Key Responsibilities:

Voice & Audio Intelligence
- Build, fine-tune, and deploy ASR models (e.g., Whisper, wav2vec 2.0, Conformer) for real-time transcription.
- Develop and fine-tune high-quality TTS systems using VITS, Tacotron, or FastSpeech for lifelike voice generation and cloning.
- Implement speaker diarization for segmenting and identifying speakers in multi-party conversations using embeddings (x-vectors/d-vectors) and clustering (AHC, VBx, spectral clustering).
- Design robust wake word detection models with ultra-low latency and high accuracy in noisy conditions.

Real-Time Audio Streaming & Voice Agent Infrastructure
- Architect bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
- Integrate voice AI models into live voice agent solutions, IVR automation, and AI contact center platforms.
- Optimize for latency, concurrency, and continuous audio streaming with context buffering and voice activity detection (VAD).
- Build scalable microservices to process, decode, encode, and stream audio across common codecs (e.g., PCM, Opus, μ-law, AAC, MP3) and containers (e.g., WAV, MP4).

Deep Learning & NLP Architecture
- Utilize transformers, encoder-decoder models, GANs, VAEs, and diffusion models for speech and language tasks.
- Implement end-to-end pipelines including text normalization, G2P mapping, NLP intent extraction, and emotion/prosody control.
- Fine-tune pre-trained language models for integration with voice-based user interfaces.

Modular System Development
- Build reusable, plug-and-play modules for ASR, TTS, diarization, codecs, streaming inference, and data augmentation.
- Design APIs and interfaces for orchestrating voice tasks across multi-stage pipelines with format conversions and buffering.
- Develop performance benchmarks and optimize for CPU/GPU utilization, memory footprint, and real-time constraints.

Engineering & Deployment
- Write robust, modular, and efficient Python code.
- Work with Docker, Kubernetes, and cloud deployment (AWS, Azure, GCP).
- Optimize models for real-time inference using ONNX, TorchScript, and CUDA, including quantization, context-aware inference, and model caching.
- Deploy voice models on device.

Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
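For context, transcription with one of the ASR model families named above can be sketched in a few lines with OpenAI's open-source Whisper package (pip install openai-whisper). The audio file path and model size are assumptions; real-time use would stream VAD-segmented chunks rather than batch-decode a file.

```python
# Minimal offline ASR sketch with the openai-whisper package.
import whisper

model = whisper.load_model("base")               # small multilingual checkpoint
result = model.transcribe("call_recording.wav")  # hypothetical audio file
print(result["text"])
```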
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Summary

Position Summary: Assistant Manager – Infrastructure SQL Services – Deloitte Support Services India Pvt. Ltd.

The ITS Operations function is accountable for delivering all internal technology infrastructure – including Email, Skype, File Services, and the platforms underpinning SQL, SAP, and Enterprise (IT) Security Services. It also provides the technology that supports the service lines in delivering client-facing services or client engagements as part of its client IT services.

Team Summary
The Infrastructure SQL Services team is responsible for managing SQL infrastructure, including databases, servers, and clusters, throughout their lifecycle. This role plays an important part in the SQL-related aspects of designing, testing, operating, and improving IT services. This is a large, enterprise SQL environment underpinning many mission-critical applications. Interfacing with business application owners and providing SQL support and guidance is a key element. As the IT function is spread over multiple geographic locations, you will be expected to communicate and collaborate effectively with remote colleagues.

Responsibilities
- Administer, maintain, and implement SQL Server databases (on-premises and cloud-based).
- Oversee database performance tuning, query optimization, and troubleshooting for mission-critical systems.
- Implement and manage high availability and disaster recovery solutions (e.g., Always On Availability Groups, clustering, replication).
- Develop, enforce, and monitor database security policies, including user access, encryption, and compliance with regulatory requirements.
- Automate database maintenance tasks and develop scripts for monitoring and reporting.
- Conduct root cause analysis for critical incidents and implement preventive solutions.
- Collaborate with architects, developers, and infrastructure teams to align database solutions with business needs.
- Maintain comprehensive documentation for database configurations, procedures, and standards.
- Respond to service outages which affect Deloitte’s business operation and reputation, including out-of-hours escalations as part of a 24x7 on-call rota.
- Maintain the performance, availability, and security of SQL services, with a focus on continuous service improvement.
- Install and manage SSIS packages; write and deploy SSRS reports.
- Perform proactive system/platform availability checks.
- Carry out server performance management and capacity planning.
- Troubleshoot and break-fix (incidents and service requests).
- Document and cross-train other team members.
- Perform systematic and periodic application/infrastructure availability checks and tasks.
- Share knowledge of new solutions with UK and Swiss Security Operations teams.
- Assist with client audits / MF Standards / ISO compliance and evidence gathering.

Essential
- In-depth knowledge and understanding of SQL gained in a large-scale enterprise estate, including both on-premises and cloud-hosted infrastructure.
- In-depth knowledge of SQL high availability techniques, specifically Always On Availability Groups and Failover Cluster Instances.
- Experience with cloud database platforms (Azure SQL, AWS RDS, etc.).
- Experience with installing and managing SSIS (Integration Services) packages and writing and deploying SSRS (Reporting Services) reports.
- Strong SQL performance tuning and troubleshooting skills.
- Strong experience in SQL backup and recovery processes.
- Fluency in T-SQL scripting.
- Experience in server performance management and capacity planning.
- Good knowledge of client/server architectures, primarily but not exclusively centred on the Microsoft suite of back-office products.
- Basic PowerShell scripting.
- SolarWinds and SCOM monitoring.
- A solid understanding of the ITIL framework.
- Exceptional communication skills, both written and verbal.
- Diplomatic and persuasive, with an ability to handle difficult conversations and confidently manage stakeholders.
- A strong track record of delivering continual service improvement.
- Able to communicate technical issues effectively to technical and non-technical audiences.
- Able to work as part of a geographically separated team.

Desirable
- Database and server migration from on-premises architecture to cloud (Azure and AWS).
- ITIL Service Operations knowledge (Event Management, Incident Management, Change Management, and Problem Management).
- Advanced PowerShell scripting.

Tools & Technology
- SQL Server 2017, 2019, and 2022
- Azure/AWS (IaaS and PaaS)
- SSRS, SSIS
- T-SQL and PowerShell scripting
- SolarWinds and SCOM monitoring
- RedGate SQL Monitor
- ServiceNow
- CyberArk (password management tool)

Technical Certifications (Must have)
- ITIL v3 or v4 Foundation
- Certification in SQL Server and Azure cloud technology

Technical Certifications (Good to have)
- DP-300 and AI-900 certifications
- Azure Fundamentals certification (AZ-900)
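As a sketch of the proactive health checks this role describes, the following queries SQL Server's Always On DMVs from Python via pyodbc and flags unhealthy replicas. The server name and connection string are placeholder assumptions; the system views (sys.dm_hadr_availability_replica_states and friends) are standard SQL Server catalog objects.

```python
# Hedged sketch: Always On availability-group health check over pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql-listener.example.com;"
    "DATABASE=master;Trusted_Connection=yes;Encrypt=yes"  # assumed DSN details
)

query = """
SELECT ag.name            AS availability_group,
       ar.replica_server_name,
       rs.role_desc,
       rs.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states rs
JOIN sys.availability_replicas ar ON rs.replica_id = ar.replica_id
JOIN sys.availability_groups  ag ON rs.group_id   = ag.group_id;
"""
for row in conn.cursor().execute(query):
    status = "OK" if row.synchronization_health_desc == "HEALTHY" else "ALERT"
    print(f"[{status}] {row.availability_group} / {row.replica_server_name} "
          f"({row.role_desc}): {row.synchronization_health_desc}")
```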
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria.
Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 307462
Posted 1 week ago
3.0 - 4.0 years
9 - 23 Lacs
Thiruvananthapuram
On-site
We are looking for a Senior Data Scientist with a strong foundation in machine learning and data analysis, and growing expertise in LLMs and Gen AI. The ideal candidate will be passionate about uncovering insights from data, proposing impactful use cases, and building intelligent solutions that drive business value.

Key Responsibilities:
- Analyze structured and unstructured data to identify trends, patterns, and opportunities.
- Propose and validate AI/ML use cases based on business data and stakeholder needs.
- Build, evaluate, and deploy machine learning models for classification, regression, clustering, etc.
- Work with LLMs and GenAI tools to prototype and integrate intelligent solutions (e.g., chatbots, summarization, content generation).
- Collaborate with data engineers, product managers, and business teams to deliver end-to-end solutions.
- Ensure data quality, model interpretability, and ethical AI practices.
- Document experiments, share findings, and contribute to knowledge sharing within the team.

Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
- 3–4 years of hands-on experience in data science and machine learning.
- Proficiency in Python and ML libraries.
- Experience with data wrangling, feature engineering, and model evaluation.
- Exposure to LLMs and GenAI tools (e.g., Hugging Face, LangChain, OpenAI APIs).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and version control (Git).
- Strong communication and storytelling skills with a data-driven mindset.

Note: The final discussion round will be face-to-face (F2F).

Job Type: Full-time
Pay: ₹944,773.51 - ₹2,340,222.09 per year
Work Location: In person
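For context, a GenAI summarization prototype of the kind this posting mentions can be sketched with the official OpenAI Python client (pip install openai). The model name and input text are assumptions; the API key is read from the OPENAI_API_KEY environment variable.

```python
# Hedged sketch of an LLM-based ticket summarizer using the OpenAI client.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

ticket = (
    "Customer reports intermittent login failures since the last release, "
    "mostly on mobile, and asks whether a fix is scheduled."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; swap for whatever is available
    messages=[
        {"role": "system", "content": "Summarize support tickets in one sentence."},
        {"role": "user", "content": ticket},
    ],
)
print(response.choices[0].message.content)
```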
Posted 1 week ago
5.0 years
3 - 8 Lacs
Cochin
On-site
Minimum Required Experience: 5 years
Full Time

Skills
- Oracle 11g
- Apache HTTP Server

Description

Key Responsibilities:
- Architect, install, configure, and manage enterprise middleware platforms including Oracle WebLogic (11g/12c), SOA Suite, OBIEE, Apache HTTP Server, Tomcat, and JBoss EAP.
- Design and implement high availability (HA) and clustered environments for mission-critical applications.
- Optimize JVM memory settings, JDBC connection pools, and thread configurations to ensure application performance and stability.
- Integrate and configure middleware services with enterprise systems such as Active Directory (LDAP) and SSL-enabled environments.
- Contribute to architecture decisions and request flow designs for middleware applications, ensuring scalability, security, and performance.
- Develop scripts and automation using WLST (Jython), Python, and shell scripting for deployment, monitoring, and maintenance.
- Lead disaster recovery (DR) initiatives by building DR environments, ensuring parity with production, performing DR drills, and managing failover/failback activities.
- Apply and manage security patches across Fusion Middleware products, JBoss, WebLogic, and JDK environments.
- Participate in migration projects, upgrades, and performance tuning initiatives.
- Work closely with developers, infrastructure teams, and application support for issue resolution and system enhancements.
- Document system architecture, configuration changes, and standard operating procedures.

Required Skills & Qualifications:
- 5+ years of experience in middleware administration and architecture.
- Deep hands-on knowledge of SOA Suite, OBIEE, WebLogic (11g/12c), JBoss EAP, Tomcat, and Apache.
- Strong understanding of clustering, load balancing, and high availability configurations.
- Experience with performance tuning, JVM optimization, JDBC, JMS, and security configurations.
- Scripting proficiency with WLST, Python, and shell.
- Experience in building and managing disaster recovery setups.
- Knowledge of Oracle Cloud Infrastructure (OCI), with relevant certification (OCI Architect Associate preferred).
- Strong analytical and problem-solving skills with a proactive mindset.
- Familiarity with ITIL concepts and best practices.

Preferred Qualifications:
- Oracle Cloud Infrastructure Architect Associate certification.
- ITIL Foundation certification is a plus.
- Experience working in banking or financial services environments is a plus.
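As a flavor of the WLST (Jython) automation this role mentions, here is a hedged sketch of a server health check run inside the WLST shell (e.g., wlst.sh healthcheck.py). Host, port, credentials, and server names are placeholder assumptions; connect(), domainRuntime(), and disconnect() are standard WLST built-ins.

```python
# Hedged WLST sketch: report the lifecycle state of each managed server.
connect('weblogic', 'welcome1', 't3://admin-host.example.com:7001')  # assumed credentials/URL
domainRuntime()

# cmo here is the DomainRuntimeMBean; list each server's current state.
for server in cmo.getServerLifeCycleRuntimes():
    print('%s: %s' % (server.getName(), server.getState()))

disconnect()
exit()
```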
Posted 1 week ago
8.0 years
12 Lacs
India
On-site
Experience: 8+ years

We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake.

Key Responsibilities

1. Snowflake Development & Optimization
- Design and develop Snowflake databases, schemas, tables, and views following best practices.
- Write complex SQL queries, stored procedures, and UDFs for data transformation.
- Optimize query performance using clustering, partitioning, and materialized views.
- Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks).

2. Data Pipeline Development
- Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark.
- Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe).
- Develop CDC (Change Data Capture) and real-time data processing solutions.

3. Data Modeling & Warehousing
- Design star schema, snowflake schema, and data vault models in Snowflake.
- Implement data sharing, secure views, and dynamic data masking.
- Ensure data quality, consistency, and governance across Snowflake environments.

4. Performance Tuning & Troubleshooting
- Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage).
- Troubleshoot data pipeline failures, latency issues, and query bottlenecks.
- Work with DevOps teams to automate deployments and CI/CD pipelines.

5. Collaboration & Documentation
- Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions.
- Document data flows, architecture, and technical specifications.
- Mentor junior developers on Snowflake best practices.

Required Skills & Qualifications
- 8+ years in database development, data warehousing, or ETL.
- 4+ years of hands-on Snowflake development experience.
- Strong SQL or Python skills for data processing.
- Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark).
- Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT).
- Certifications: SnowPro Core Certification (preferred).

Preferred Skills
- Familiarity with data governance and metadata management.
- Familiarity with DBT, Airflow, SSIS, and IICS.
- Knowledge of CI/CD pipelines (Azure DevOps).

Job Type: Full-time
Pay: From ₹1,200,000.00 per year
Schedule: Monday to Friday

Application Question(s):
- How many years of total experience do you currently have?
- How many years of experience do you have in Snowflake development?
- What is your current CTC?
- What is your expected CTC?
- What is your notice period/LWD?
- What is your current location?
- Are you comfortable attending the L2 round face to face in Hyderabad?
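For context on the clustering optimization described in this posting, here is a hedged sketch using the official snowflake-connector-python package: it creates a table with a clustering key and inspects clustering quality with the built-in SYSTEM$CLUSTERING_INFORMATION function. Account, credentials, warehouse, and table names are placeholder assumptions.

```python
# Hedged Snowflake sketch: clustering key DDL plus a clustering-quality check.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # assumed account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Cluster on event_date so date-range scans prune micro-partitions.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        event_id   NUMBER,
        event_date DATE,
        payload    VARIANT
    ) CLUSTER BY (event_date)
""")

cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date)')")
print(cur.fetchone()[0])  # JSON report of clustering depth/overlap

cur.close()
conn.close()
```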
Posted 1 week ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
Key Responsibilities
- Architect, install, configure, and manage enterprise middleware platforms including Oracle WebLogic (11g/12c), SOA Suite, OBIEE, Apache HTTP Server, Tomcat, and JBoss EAP.
- Design and implement high availability (HA) and clustered environments for mission-critical applications.
- Optimize JVM memory settings, JDBC connection pools, and thread configurations to ensure application performance and stability.
- Integrate and configure middleware services with enterprise systems such as Active Directory (LDAP) and SSL-enabled environments.
- Contribute to architecture decisions and request flow designs for middleware applications ensuring scalability, security, and performance.
- Develop scripts and automation using WLST (Jython), Python, and Shell scripting for deployment, monitoring, and maintenance.
- Lead disaster recovery (DR) initiatives by building DR environments, ensuring parity with production, performing DR drills, and managing failover/failback activities.
- Apply and manage security patches across Fusion Middleware products, JBoss, WebLogic, and JDK environments.
- Participate in migration projects, upgrades, and performance tuning initiatives.
- Work closely with developers, infrastructure teams, and application support for issue resolution and system enhancements.
- Document system architecture, configuration changes, and standard operating procedures.

Required Skills & Qualifications
- 5+ years of experience in Middleware Administration and Architecture.
- Deep hands-on knowledge of SOA Suite, OBIEE, WebLogic (11g/12c), JBoss EAP, Tomcat, Apache.
- Strong understanding of clustering, load balancing, and high availability configurations.
- Experience with performance tuning, JVM optimization, JDBC, JMS, and security configurations.
- Scripting proficiency with WLST, Python, and Shell.
- Experience in building and managing disaster recovery setups.
- Knowledge of Oracle Cloud Infrastructure (OCI) with relevant certification (OCI Architect Associate preferred).
- Strong analytical and problem-solving skills with a proactive mindset.
- Familiarity with ITIL concepts and best practices.

Preferred Qualifications
- Oracle Cloud Infrastructure Architect Associate certification.
- ITIL Foundation certification is a plus.
- Experience working in banking or financial services environments is a plus.
Posted 1 week ago
14.0 years
0 Lacs
Hyderābād
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

Job Details

About Salesforce
We’re Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place.

About AI Developer Experience Engineering
Hyperforce Developer Experience is a team dedicated to enhancing developer productivity and experience by leveraging generative AI. We're focused on revolutionizing software development by delivering best-in-class products and ensuring high customer trust at cloud scale. By providing cutting-edge AI-powered tools and solutions, we aim to exceed customer expectations and establish a strong reputation for excellence. We build highly scalable, secure, reliable, easy-to-use, and intelligent services that are the foundation for all developers at Salesforce to innovate with high quality and great agility.

We are looking for a talented AI Software Engineering Leader to join our team and build a cutting-edge AI platform that enhances developer productivity, offering features like advanced code generation, intelligent code completion, and automated testing throughout the SDLC. The ideal candidate will have a strong foundation in machine learning, deep learning, and software engineering. You'll work on state-of-the-art AI models, optimize infrastructure for scalability, and collaborate with cross-functional teams to deliver innovative solutions. Join us in shaping the future of software development and making a significant impact on developer productivity.

The key objectives of this team are:
- Lead a team of AI engineers in the development and implementation of AI solutions across the SDLC.
- Cutting-Edge AI: Continuously innovate and explore advanced AI techniques to improve SDLC processes. Stay updated with the latest AI technologies and trends to drive innovation.
- Accelerate Development: Reduce development time and effort through automated code generation and intelligent suggestions.
- Improve Code Quality: Enhance code accuracy, readability, and maintainability with AI-powered tools.
- Foster Innovation: Empower developers to explore new ideas and experiment with cutting-edge technologies.
- Streamline Workflows: Automate repetitive tasks and streamline the development process.
- Enhance Data-Driven Insights: Gather, refine, and analyze data to optimize AI models and measure their impact.
- Create User-Friendly Interfaces: Design intuitive and user-friendly interfaces for AI-powered tools.
- Advanced Code Generation: Empower developers with features like auto-completion, code generation, and unit test generation.
- Scalable Infrastructure: Build a robust infrastructure to handle massive workloads and support a growing user base.

Responsibilities:
- Drive the vision of transforming engineer productivity by infusing AI technologies and tools into the SDLC, in collaboration with teams across geographies.
- Build and lead a team of engineers to deliver AI engineering initiatives spanning local coding through production.
- Solid experience in building large-scale AI systems and distributed systems in a public cloud (AWS or GCP) to reliably process billions of data points.
- Proactively identify reliability and data quality problems and drive the triage and remediation process.
- Invest in continuous employee development of a highly technical team by mentoring and coaching engineers and technical leads in the team. Recruit and attract top talent.
- Drive execution and delivery by collaborating with cross-functional teams, architects, product owners, and engineers. Experience managing 2+ engineering teams.
- Design and implement algorithms for planning and generating code suggestions that meet user requirements. Evaluate AI model performance and optimize as needed for accuracy and efficiency.
- Develop and maintain data retrieval to fetch relevant code snippets, APIs, and documentation from various sources, ensuring data freshness, accuracy, and relevance to support code generation.
- Design and implement code generation algorithms using AI/ML techniques (e.g., sequence-to-sequence models, language models) to produce high-quality code suggestions that meet coding standards, best practices, and user preferences.
- Develop and maintain AI-powered features in the IDE for autocomplete and chat, leveraging agentic workflows.
- Hands-on working experience with Cursor, Windsurf, IntelliJ, Visual Studio Code, PyCharm, Eclipse, or equivalent IDE plugin development across different programming languages.
- Develop and build agentic flows using an Agent Platform and MCP Platform.
- Experience with Infrastructure as Code platforms.
- Design, implement, and maintain robust metrics frameworks to capture key user interactions and product usage data within GenAI products. Collaborate with engineers to ensure efficient and accurate data collection across various GenAI systems.
- Analyze data and generate insights using statistical analysis and machine learning.
- Eat, sleep, and breathe techniques for improving developer productivity. You have a knack for understanding developer needs and providing creative solutions to improve developer productivity using AI tools.
- Create and enforce processes that ensure quality of work, and drive engineering excellence.
- Exhibit a customer-first mentality while making decisions, and be responsible and accountable for the output of the team.

Core Qualifications:
- BS, MS, or PhD in computer science or a related field, or equivalent work experience.
- 14+ years of relevant experience in software development teams, with 5+ years of experience managing teams.
- At least 3+ years of experience in AI/ML engineering, with a focus on building large-scale AI systems such as enterprise knowledge, code search, and agent platforms.
- Experience with large-scale AI/ML projects, including data preparation, model training, and deployment.
- Proficiency in programming languages such as Python, Java, TypeScript, or Golang.
- Knowledge of NLP techniques, including language models, sequence-to-sequence models, and prompt engineering.
- Familiarity with code generation techniques, including program synthesis and code completion.
- Knowledge of software development principles, including design patterns, testing, and version control.
- Strong working knowledge of cloud-native services (Kubernetes, block/object storage, RDBMS, AI services, etc.) in AWS or GCP public clouds.
- Strong analytical skills with expertise in statistical modeling and machine learning techniques (e.g., regression analysis, classification, clustering).
- Excellent communication skills, both written and verbal, to effectively collaborate with cross-functional teams (engineering, product management).
- Ability to work in a fast-paced environment, with a focus on delivering high-quality results under tight deadlines.
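For context on the code-completion idea this posting centers on, here is a hedged sketch using the Hugging Face transformers pipeline with a small open model as a stand-in. The model choice and prompt are assumptions; a production system would use a dedicated code model plus retrieval and post-processing.

```python
# Hedged sketch: toy code completion with a text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model
prompt = "def fibonacci(n):\n    "
completion = generator(prompt, max_new_tokens=40, do_sample=False)
print(completion[0]["generated_text"])
```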
Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
The Specialist - Software Development (Artificial Intelligence) leads the design, development, and implementation of AI and machine learning solutions that address complex business challenges. This role requires expertise in AI algorithms, model development, and software engineering best practices. The individual will work closely with cross-functional teams to deliver intelligent systems that enhance business operations and decision-making.

Key Responsibilities:
• AI Solution Design & Development:
  o Lead the development of AI-driven applications and platforms using machine learning, deep learning, and NLP techniques.
  o Design, train, and optimize machine learning models using frameworks such as TensorFlow, PyTorch, Keras, or Scikit-learn.
  o Implement advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision.
• Software Development & Integration:
  o Develop scalable AI models and integrate them into software applications using languages such as Python, R, or Java.
  o Build APIs and microservices to enable the deployment of AI models in cloud environments or on-premise systems.
  o Ensure that AI models are integrated with back-end systems, databases, and other business applications.
• Data Management & Preprocessing:
  o Collaborate with data scientists and data engineers to gather, preprocess, and analyze large datasets.
  o Develop data pipelines to ensure the continuous availability of clean, structured data for model training and evaluation.
  o Implement feature engineering techniques to enhance the accuracy and performance of machine learning models.
• AI Model Evaluation & Optimization:
  o Regularly evaluate AI models using performance metrics (e.g., precision, recall, F1 score) and fine-tune them to improve accuracy.
  o Perform hyperparameter tuning and cross-validation to ensure robust model performance.
  o Implement methods for model explainability and transparency (e.g., LIME, SHAP) to ensure trustworthiness in AI decisions.
• AI Strategy & Leadership:
  o Collaborate with business stakeholders to identify opportunities for AI adoption and develop project roadmaps.
  o Provide technical leadership and mentorship to junior AI developers and data scientists, ensuring adherence to best practices in AI development.
  o Stay current with AI trends and research, introducing innovative techniques and tools to the team.
• Security & Ethical Considerations:
  o Ensure AI models comply with ethical guidelines, including fairness, accountability, and transparency.
  o Implement security measures to protect sensitive data and AI models from vulnerabilities and attacks.
  o Monitor the performance of AI systems in production, ensuring they operate within ethical and legal boundaries.
• Collaboration & Cross-Functional Support:
  o Collaborate with DevOps teams to ensure AI models are deployed efficiently in production environments.
  o Work closely with product managers, business analysts, and stakeholders to understand requirements and align AI solutions with business needs.
  o Participate in Agile ceremonies, including sprint planning and retrospectives, to ensure timely delivery of AI projects.
• Continuous Improvement & Research:
  o Conduct research and stay updated with the latest developments in AI and machine learning technologies.
  o Evaluate new tools, libraries, and methodologies to improve the efficiency and accuracy of AI model development.
  o Drive continuous improvement initiatives to enhance the scalability and robustness of AI systems.

Required Skills & Qualifications:
• Bachelor’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
• 5+ years of experience in software development with a strong focus on AI and machine learning.
• Expertise in AI frameworks and libraries (e.g., TensorFlow, PyTorch, Keras, Scikit-learn).
• Proficiency in programming languages such as Python, R, or Java, and familiarity with AI-related tools (e.g., Jupyter Notebooks, MLflow).
• Strong knowledge of data science and machine learning algorithms, including regression, classification, clustering, and deep learning models.
• Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) for deploying AI models and managing data pipelines.
• Strong understanding of data structures, databases, and large-scale data processing technologies (e.g., Hadoop, Spark).
• Familiarity with Agile development methodologies and version control systems (Git).

Preferred Qualifications:
• Master’s or PhD in Artificial Intelligence, Machine Learning, or a related field.
• Experience with natural language processing (NLP) techniques (e.g., BERT, GPT, LSTM, Transformer models).
• Knowledge of computer vision technologies (e.g., CNNs, OpenCV).
• Familiarity with edge computing and deploying AI models on IoT devices.
• Certification in AI/ML or cloud platforms (e.g., AWS Certified Machine Learning, Google Professional Data Engineer).
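As a sketch of the evaluation and tuning practices listed above (precision/recall/F1, hyperparameter tuning, cross-validation), here is a minimal scikit-learn example. The data is synthetic and the parameter grid is an assumption for illustration.

```python
# Hedged sketch: cross-validated tuning plus precision/recall/F1 reporting.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # assumed grid
    scoring="f1",
    cv=5,
)
search.fit(X_train, y_train)

precision, recall, f1, _ = precision_recall_fscore_support(
    y_test, search.predict(X_test), average="binary"
)
print(f"best C={search.best_params_['C']}: "
      f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```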
Posted 1 week ago
0.0 - 5.0 years
10 - 18 Lacs
Chennai
Hybrid
Qualification: Master's degree related to Bioinformatics and Computational Biology

Role & responsibilities
- Combine and analyze multi-omics datasets to find common biological signals and actionable targets.
- Classify disease and immune subtypes from both bulk and single-cell data using supervised and unsupervised machine learning methods.
- Organize public datasets, evaluate analytical tools, and standardize workflows for clarity and reuse.
- Conduct quality control, normalization, and subtype and signature predictions across various sequencing platforms.
- Create and maintain complete analysis pipelines for new technologies such as spatial transcriptomics.
- Search large consortia or databases for comparative studies and hypothesis generation.
- Optimize resource use and job scheduling on Linux or HPC clusters; maintain version control and documentation.

Preferred candidate profile
- Programming & Scripting: R, Python, UNIX shell
- Statistics & ML: Feature selection, clustering, predictive modeling, dimensionality reduction, survival analysis
- Bulk & Single-Cell Analytics: RNA-seq, WES/WGS, methylation, proteomics, RPPA, scRNA-seq
- Workflow & Reproducibility: Workflow managers, such as Nextflow or Snakemake, containers, Git
- Data Resources: Major public repositories like TCGA and GEO, and internal cohort data management
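For context on the unsupervised subtype work described above, here is a hedged sketch of a standard single-cell clustering pass using Scanpy (pip install scanpy). The input .h5ad file and all parameter choices are assumptions; a real analysis would add QC filtering, batch correction, and marker-based annotation.

```python
# Hedged sketch: normalize -> reduce -> cluster a scRNA-seq count matrix.
import scanpy as sc

adata = sc.read_h5ad("pbmc_counts.h5ad")      # hypothetical count matrix

sc.pp.normalize_total(adata, target_sum=1e4)  # library-size normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]

sc.pp.pca(adata, n_comps=50)                  # dimensionality reduction
sc.pp.neighbors(adata, n_neighbors=15)        # kNN graph
sc.tl.leiden(adata, resolution=0.5)           # graph-based clustering
sc.tl.umap(adata)                             # 2-D embedding for inspection

print(adata.obs["leiden"].value_counts())     # cells per cluster
```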
Posted 1 week ago
5.0 years
3 - 9 Lacs
Noida
Remote
Req ID: 334405

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Sr. Engineer to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Job Description
At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

Preferred Experience
- The ideal candidate has been supporting traditional server-based relational databases (SQL Server and PostgreSQL) for 5+ years, with the last 3+ years in public cloud environments (GCP and Azure).
- 5+ years of database experience in SQL Server 2008/2012/above versions.
- 2+ years of database experience in PostgreSQL (all versions) as a secondary skill.
- Knowledge of MySQL, Azure SQL, MI instances, and GCP instances is a plus.
- Strong planning, deployment, maintenance, and troubleshooting experience in high-availability database environments (Always On, clustering, mirroring, etc.).
- Expert in setting up health checks and troubleshooting SQL backups, restores, and recovery models.
- Able to work on on-call rotations and provide 24x7 shift-hours support at L2/L3 level.
- Able to work independently in a project scenario and do POCs.
- Experience in updating KB articles, Problem Management articles, and SOPs/runbooks.
- Experience with Google instances (SQL, MySQL, PostgreSQL).
- Experience in capacity planning, DR setup, and project work (new environment setup) for SQL, MySQL, and PostgreSQL.
- Passion for delivering timely and outstanding customer service.
- Ability to work independently, with little or no direct supervision.
- Great written and oral communication skills with internal and external customers.
- Strong ITIL foundation, Continual Service Improvement, and Total Quality Management experience.
- Report weekly to management about abnormalities and critical issues; provide root cause analysis and recommendations; work with infrastructure, application, and/or other teams for problem resolution, following the escalation path if needed.

Basic Qualifications
- 5+ years of overall operational experience.
- 3+ years of Azure/GCP experience as a cloud DBA (SQL and PostgreSQL).
- 3+ years of experience working in diverse cloud support database environments in a 24x7 production support model.
- Experience with SSIS, SSRS, and T-SQL.
- Experience with Python/PowerShell scripting (preferred).
- Secondary skill in MySQL/PostgreSQL (preferred).
- Ability to work independently, with little or no direct supervision.
- Ability to work in a rapidly changing environment.
- Ability to multi-task and context-switch effectively between different activities and teams.

Preferred Certifications
- Azure Fundamentals certification (AZ-900) - REQUIRED
- Azure Database Certification (DP-300) - preferred
- AWS Certified Database Specialty - preferred
- MCTS, MCITP, OCP certifications a plus
- Google Cloud Engineer - REQUIRED
- B.Tech/BE/MCA in Information Technology degree or equivalent experience
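As a sketch of the backup health checks this role involves, the following flags databases whose last full backup is older than 24 hours by querying msdb.dbo.backupset from Python via pyodbc. The server name and connection details are placeholder assumptions; backupset and sys.databases are standard SQL Server system objects.

```python
# Hedged sketch: stale full-backup report via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=db-prod.example.com;"
    "DATABASE=msdb;Trusted_Connection=yes;Encrypt=yes"  # assumed DSN details
)
query = """
SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
       ON b.database_name = d.name AND b.type = 'D'   -- 'D' = full backup
WHERE d.name NOT IN ('tempdb')
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
"""
for name, last_backup in conn.cursor().execute(query):
    print(f"STALE BACKUP: {name} (last full: {last_backup})")
```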
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 week ago
7.0 years
3 - 9 Lacs
Noida
Remote
Req ID: 334411

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Technology - Lead Engineer to join our team in Noida, Uttar Pradesh (IN-UP), India (IN). Grade 8 - SQL.

Job Description
At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here.

Preferred Experience
- The ideal candidate has been supporting traditional server-based relational databases (SQL Server and PostgreSQL) for 7+ years, with the last 3+ years in public cloud environments (GCP and Azure).
- 5+ years of database experience in SQL Server 2008/2012/above versions.
- 2+ years of database experience in PostgreSQL (all versions) as a secondary skill.
- Knowledge of MySQL, Azure SQL, MI instances, and GCP instances is a plus.
- Strong planning, deployment, maintenance, and troubleshooting experience in high-availability database environments (Always On, clustering, mirroring, etc.).
- Expert in setting up health checks and troubleshooting SQL backups, restores, and recovery models.
- Able to work on on-call rotations and provide 24x7 shift hours support at L2/L3 level.
- Able to work independently in a project scenario and do POCs.
- Experience in updating KB articles, Problem Management articles, and SOPs/runbooks.
- Experience with Google instances (SQL, MySQL, PostgreSQL).
- Experience in capacity planning, DR setup, and project work (new environment setup) for SQL, MySQL, and PostgreSQL.
- Passion for delivering timely and outstanding customer service.
- Ability to work independently, with little or no direct supervision.
- Great written and oral communication skills with internal and external customers.
- Strong ITIL foundation, Continual Service Improvement, and Total Quality Management experience.
- Report weekly to management about abnormalities and critical issues; provide root cause analysis and recommendations; work with infrastructure, application, and/or other teams for problem resolution, following the escalation path if needed.

Basic Qualifications
- 7+ years of overall operational experience.
- 3+ years of Azure/GCP experience as a cloud DBA (SQL and PostgreSQL).
- 3+ years of experience working in diverse cloud support database environments in a 24x7 production support model.
- Experience with SSIS, SSRS, and T-SQL.
- Experience with Python/PowerShell scripting (preferred).
- Secondary skill in Oracle/MySQL/PostgreSQL (preferred).
- Ability to work independently, with little or no direct supervision.
- Ability to work in a rapidly changing environment.
- Ability to multi-task and context-switch effectively between different activities and teams.

Preferred Certifications
- Azure Fundamentals certification (AZ-900) - REQUIRED
- Azure Database Certification (DP-300) - preferred
- AWS Certified Database Specialty - preferred
- MCTS, MCITP, OCP certifications a plus
- Google Cloud Engineer - REQUIRED
- B.Tech/BE/MCA in Information Technology degree or equivalent experience
We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.
Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.
NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form: https://us.nttdata.com/en/contact-us.
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.
NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
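For a concrete picture of the backup health-check work this posting describes, here is a minimal sketch in Python. It queries msdb's backup history for databases whose latest full backup is older than a threshold; the connection string, the 24-hour threshold, and the alert format are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: flag databases whose most recent full backup is older than
# 24 hours, using SQL Server's msdb backup history. Connection details and
# the threshold are placeholder assumptions.
import pyodbc

QUERY = """
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name AND b.type = 'D'  -- 'D' = full backup
WHERE d.name <> 'tempdb'
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(HOUR, -24, GETDATE());
"""

def stale_backups(conn_str: str):
    """Return (database, last_full_backup) pairs that breach the threshold."""
    with pyodbc.connect(conn_str) as conn:
        rows = conn.cursor().execute(QUERY).fetchall()
    return [(r.database_name, r.last_full_backup) for r in rows]

if __name__ == "__main__":
    # Hypothetical connection string; replace with your server's details.
    cs = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myhost;Trusted_Connection=yes;"
    for db, last in stale_backups(cs):
        print(f"ALERT: {db} last full backup: {last}")
```

In practice a check like this would be scheduled (e.g., from a monitoring host) and wired into the team's alerting, but the scheduling and alert plumbing are outside the scope of this sketch.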
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Noida
On-site
About Us
Attentive.ai is a leading provider of landscape and property management software powered by cutting-edge Artificial Intelligence (AI). Our software is designed to optimize workflows and help businesses scale up effortlessly in the outdoor services industry. Our Automeasure software caters to landscaping, snow removal, paving maintenance, and facilities maintenance businesses. We are also building Beam AI, an advanced AI engine focused on automating construction take-off and estimation workflows through deep AI. Beam AI is designed to extract intelligence from complex construction drawings, helping teams save time, reduce errors, and increase bid efficiency. Trusted by top US and Canadian sales teams, we are backed by renowned investors such as Sequoia Surge and InfoEdge Ventures.
Position Description
As a Senior AI Research Engineer, you will be an integral part of our AI research team focused on transforming the construction industry through cutting-edge deep learning, computer vision and NLP technologies. You will contribute to the development of intelligent systems for automated construction take-off and estimation by working with unstructured data such as blueprints, drawings (including SVGs), and PDF documents. In this role, you will support the end-to-end lifecycle of AI-based solutions, from prototyping and experimentation to deployment in production. Your contributions will directly impact the scalability, accuracy, and efficiency of our products.
Roles & Responsibilities
Contribute to research and development initiatives focused on Computer Vision, Image Processing, and Deep Learning applied to construction-related data.
Build and optimize models for extracting insights from documents such as blueprints, scanned PDFs, and SVG files.
Contribute to the development of multi-modal models that integrate vision with language-based features (NLP/LLMs).
Follow best data science and machine learning practices, including data-centric development, experiment tracking, model validation, and reproducibility.
Collaborate with cross-functional teams including software engineers, ML researchers, and product teams to convert research ideas into real-world applications.
Write clean, scalable, and production-ready code using Python and frameworks like PyTorch, TensorFlow, or HuggingFace.
Stay updated with the latest research in computer vision and machine learning and evaluate its applicability to construction industry challenges.
Skills & Requirements
4-7 years of experience in applied AI/ML and research with a strong focus on Computer Vision and Deep Learning.
Solid understanding of image processing, visual document understanding, and feature extraction from visual data.
Familiarity with SVG graphics, NLP, or LLM-based architectures is a plus.
Deep understanding of unsupervised learning techniques like clustering, dimensionality reduction, and representation learning.
Proficiency in Python and ML frameworks such as PyTorch, OpenCV, TensorFlow, and HuggingFace Transformers.
Hands-on experience with model optimization techniques such as quantization, pruning, and knowledge distillation (a minimal quantization sketch follows this posting).
Good to have: experience with version control systems (e.g., Git), project tracking tools (e.g., JIRA), and cloud environments (GCP, AWS, or Azure).
Familiarity with Docker, Kubernetes, and containerized ML deployment pipelines.
Strong analytical and problem-solving skills with a passion for building innovative solutions; ability to rapidly prototype and iterate.
Comfortable working in a fast-paced, agile, startup-like environment with excellent communication and collaboration skills.
Why Work With Us?
Be part of a visionary team building a first-of-its-kind AI solution for the construction industry.
Exposure to real-world AI deployment and cutting-edge research in vision and multimodal learning.
A culture that encourages ownership, innovation, and growth.
Opportunities for fast learning, mentorship, and career progression.
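As a concrete illustration of one of the model optimization techniques the posting lists, below is a minimal sketch of post-training dynamic quantization in PyTorch. The toy two-layer model is an invented stand-in; a real project would apply this to its own document-understanding networks.

```python
# Minimal sketch of post-training dynamic quantization in PyTorch.
# The tiny model here is a placeholder, not the team's actual architecture.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Quantize the Linear layers' weights to int8; activations are quantized
# dynamically at inference time, so no calibration data is required.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # torch.Size([1, 10])
```

Dynamic quantization is the lowest-effort entry point; static quantization, pruning, and distillation typically need calibration data or retraining and are correspondingly more involved.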
Posted 1 week ago
5.0 years
0 Lacs
Noida
On-site
Position Overview
The candidate will play the role of an AI/ML Senior Developer/Team Lead, participating in designing, developing and validating AI/ML solutions leveraging Python and SQL for US healthcare customers.
Position General Duties and Tasks:
Participate in research, design, implementation, and optimization of machine learning models
Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
Understanding of Revenue Cycle Management processes like claims filing and adjudication
Hands-on experience in Python
Build data ingest and data transformation platforms
Identify transfer learning opportunities and new training datasets
Build AI models from scratch and help product managers and stakeholders understand results (a baseline classification sketch follows this posting)
Analyze the ML algorithms that could be used to solve a given problem and rank them by their probability of success
Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
Verify data quality, and/or ensure it via data cleaning
Supervise the data acquisition process if more data is needed
Define validation strategies
Define the pre-processing or feature engineering to be done on a given dataset
Train models and tune their hyperparameters
Analyze the errors of the model and design strategies to overcome them
Deploy models to production
Create APIs and help business customers put the results of your AI models into operation
Education
Bachelor's in Computer Science or similar; Master's preferred.
Skills
At least 5 years of hands-on programming experience working on enterprise products
Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
3+ years of project experience in Deep Learning/Machine Learning and Artificial Intelligence
Experience in building AI models using classification and clustering algorithms
Expertise in visualizing and manipulating big datasets
Strong in MS SQL
Acumen to take a complex problem, break it down into workable pieces, and code a solution
Excellent verbal and written communication skills
Ability to work in, and help define, a fast-paced, team-focused environment
Proven record of delivering and completing assigned projects and initiatives
Ability to deploy large-scale solutions to an enterprise estate
Strong interpersonal skills
Understanding of Revenue Cycle Management processes like claims filing and adjudication is a plus
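To make the classification workflow above concrete, here is a hedged baseline sketch using scikit-learn. The synthetic dataset stands in for real claims data, and the random-forest choice is illustrative rather than anything prescribed by the role.

```python
# Baseline classification sketch: split, fit, evaluate. Synthetic data is a
# placeholder for real claims/adjudication features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)

# Per-class precision/recall/F1 on the held-out split supports the error
# analysis and validation-strategy duties described above.
print(classification_report(y_te, clf.predict(X_te)))
```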
Posted 1 week ago
0 years
0 - 0 Lacs
Bareilly
On-site
Position: AI Intern
Location: Bareilly
Key Responsibilities:
Assist in developing AI models for personalized health recommendations, predictive analysis, and customer profiling.
Support the creation and training of chatbots for consultations, feedback, and follow-ups.
Analyze patient data, sales trends, and customer behavior using machine learning techniques.
Work on Natural Language Processing (NLP) for symptom recognition and treatment suggestions (see the sketch after this posting).
Help in building AI-powered dashboards for internal reporting and decision-making.
Conduct research on AI trends and their potential application in the Ayurvedic wellness space.
Required Skills:
Strong understanding of Python and libraries like Pandas, NumPy, Scikit-learn.
Exposure to AI/ML concepts like classification, clustering, and recommendation systems.
Familiarity with NLP and basic chatbot tools or APIs (Dialogflow, Rasa, etc.).
Basic knowledge of healthcare data and patient privacy principles.
Strong problem-solving and logical thinking skills.
Preferred Qualifications:
Pursuing or completed B.Tech/BCA/M.Tech/MCA in Computer Science, AI, Data Science, or related fields.
Prior experience or projects in healthtech, AI chatbots, or recommendation systems are a plus.
Working knowledge of tools like Jupyter Notebook, GitHub, and REST APIs.
What You'll Gain:
Real-world experience applying AI in the Ayurveda and healthcare domain.
Exposure to end-to-end AI project development and deployment.
Mentorship from tech leaders and wellness experts.
Certificate of Completion + chance to convert to a full-time role.
Job Type: Internship
Contract length: 6 months
Pay: ₹5,000.00 - ₹7,000.00 per month
Schedule: Day shift, fixed shift
Work Location: In person
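A small, hypothetical illustration of the NLP symptom-recognition task mentioned above: TF-IDF vectors plus cosine similarity to match a free-text complaint against known symptom descriptions. The symptom list is invented for the example; a real system would use a curated clinical vocabulary.

```python
# Toy symptom matcher: TF-IDF + cosine similarity. The symptom corpus is
# invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

symptoms = [
    "persistent dry cough and sore throat",
    "joint pain and morning stiffness",
    "indigestion, bloating and acidity",
]
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(symptoms)

# Match a free-text complaint to the closest known symptom description.
query = vectorizer.transform(["bloated stomach with acidity after meals"])
scores = cosine_similarity(query, matrix)[0]
print(symptoms[scores.argmax()], round(scores.max(), 3))
```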
Posted 1 week ago
3.0 years
1 - 2 Lacs
India
On-site
SEO Executive (3–4 Years Experience)
A results-driven SEO Executive with 3.5 years of experience in ranking high-difficulty keywords (80+ KD) and managing WordPress websites and content-rich blogs. Skilled in on-page, off-page, and technical SEO, with hands-on expertise in content optimization, backlink building, and website health audits. Actively tracks daily news, PR articles, and brand mentions across selected companies to leverage trending topics and boost contextual content and link-building opportunities.
Key Highlights:
Ranked 80+ KD keywords like “AI tools for business” and “best investment platforms” on Google Page 1
Managed 10+ WordPress websites and optimized 200+ blog posts for search and user experience
Achieved 200% YoY organic growth via content clustering, schema, and Core Web Vitals improvements
Built 100+ DA 50+ backlinks and strengthened internal linking structures
Tools: GA4, GSC, Ahrefs, SEMrush, Screaming Frog, Surfer SEO, and brand monitoring tools
Objective: To drive high-impact SEO growth through strategic keyword targeting, content-led optimization, active brand monitoring, and seamless website management.
Job Types: Full-time, Permanent
Pay: ₹12,000.00 - ₹20,000.00 per month
Benefits: Paid sick time
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Key Responsibilities
Design, develop, and maintain scalable data pipelines using AWS services and Snowflake.
Build and manage data transformation workflows using dbt.
Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, reliable, and well-documented datasets.
Optimize Snowflake performance through clustering, partitioning, and query tuning (a clustering-check sketch follows this posting).
Implement data quality checks, testing, and documentation within dbt.
Automate data workflows and integrate with CI/CD pipelines.
Ensure data governance, security, and compliance across cloud platforms.
Required Skills & Qualifications
Strong experience with Snowflake (data modeling, performance tuning, security).
Proficiency in dbt (models, macros, testing, documentation).
Solid understanding of AWS services such as S3, Lambda, Glue, and IAM.
Experience with SQL and scripting languages (e.g., Python).
Familiarity with version control systems (e.g., Git) and CI/CD tools.
Strong problem-solving skills and attention to detail.
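As a sketch of the Snowflake clustering work described above, the snippet below calls Snowflake's built-in SYSTEM$CLUSTERING_INFORMATION function from Python via the official connector. Account, credentials, warehouse, table, and column names are all placeholders.

```python
# Clustering health check from Python using the snowflake-connector-python
# package. All connection values and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="...",            # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # SYSTEM$CLUSTERING_INFORMATION reports clustering depth and overlap
    # statistics for the given table and clustering key.
    cur.execute(
        "SELECT SYSTEM$CLUSTERING_INFORMATION('ORDERS', '(ORDER_DATE)')"
    )
    print(cur.fetchone()[0])  # JSON with depth histogram, average overlaps, etc.
finally:
    conn.close()
```

A high average clustering depth on a large, frequently-filtered table is the usual cue to revisit the clustering key or rely on automatic clustering.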
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Havas CSA is seeking a Data Scientist with 2-4 years of experience to contribute to advanced analytics and predictive modeling initiatives. The ideal candidate will combine strong statistical knowledge with practical business understanding to help develop and implement models that drive customer value and business growth.
Responsibilities:
Implement and maintain customer analytics models including CLTV prediction, propensity modeling, and churn prediction (a minimal churn sketch appears at the end of this posting)
Support the development of customer segmentation models using clustering techniques and behavioral analysis
Assist in building and maintaining survival models to analyze customer lifecycle events
Work with large-scale datasets using BigQuery and Snowflake
Develop and validate machine learning models using Python and cloud-based ML platforms, specifically BQ ML, Model Garden and Amazon Bedrock
Help transform model insights into actionable business recommendations
Collaborate with analytics and activation teams to implement model outputs
Present analyses to stakeholders in clear, actionable formats
Qualifications:
Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field
1-2 years' experience in applied data science, preferably in marketing/retail
Experience in developing and implementing machine learning models
Strong understanding of statistical concepts and experimental design
Ability to communicate technical concepts to non-technical audiences
Familiarity with agile development methodologies
Technical Skills:
Advanced proficiency in: SQL and data warehouses (BigQuery, Snowflake); Python for statistical modeling; machine learning frameworks (scikit-learn, TensorFlow); statistical analysis and hypothesis testing; data visualization tools (Matplotlib, Seaborn); version control systems (Git); understanding of Google Cloud Functions and Cloud Run
Experience with: customer lifetime value modeling; RFM analysis and customer segmentation; survival analysis and hazard modeling; A/B testing and causal inference; feature engineering and selection; model validation and monitoring; cloud computing platforms (GCP/AWS/Azure)
Key Projects & Deliverables
Support development and maintenance of CLTV models
Contribute to customer segmentation models incorporating behavioral and transactional data
Implement survival models to predict customer churn
Support the development of attribution models for marketing effectiveness
Help develop recommendation engines for personalized customer experiences
Assist in creating automated reporting and monitoring systems
Soft Skills
Strong analytical and problem-solving abilities
Good communication and presentation skills
Business acumen
Collaborative team player
Strong organizational skills
Ability to translate business problems into analytical solutions
Growth Opportunities
Work on innovative data science projects for major brands
Develop expertise in cutting-edge ML technologies
Learn from experienced data science leaders
Contribute to impactful analytical solutions
Opportunity for career advancement
We offer competitive compensation, comprehensive benefits, and the opportunity to work with leading brands while solving complex analytical challenges. Join our team to grow your career while making a significant impact through data-driven decision making.
Contract Type: Permanent
Here at Havas, across the group, we pride ourselves on being committed to offering equal opportunities to all potential employees and have zero tolerance for discrimination.
We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual’s ability to perform their job.
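For a concrete picture of the churn-prediction work this role describes, here is a minimal, illustrative sketch. The synthetic, imbalanced dataset stands in for real behavioral and transactional features, and the gradient-boosting choice is an assumption rather than the team's actual stack.

```python
# Churn-propensity sketch: imbalanced synthetic data, gradient boosting,
# AUC on a stratified holdout. All data and choices are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(
    n_samples=5000, n_features=15, weights=[0.85], random_state=7
)  # roughly 15% positives, mimicking a churn base rate
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=7
)

model = GradientBoostingClassifier(random_state=7)
model.fit(X_tr, y_tr)

probs = model.predict_proba(X_te)[:, 1]  # churn propensity scores
print(f"holdout AUC: {roc_auc_score(y_te, probs):.3f}")
```

In production the propensity scores, not the hard labels, are what feed segmentation and activation, which is why a ranking metric like AUC is the natural validation choice here.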
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
We are looking for a team member for the strategic forecasting team based in Pune. Robust forecasting is a priority for businesses, as product potential has major implications across a wide range of disciplines. While a realistic forecast of potential can be arrived at through both qualitative and quantitative methods, the challenge lies in selecting and deploying the right methodology. It is therefore essential to have someone who understands, and aspires to implement, advanced analytics techniques such as Monte Carlo simulations, agent-based modeling, conjoint frameworks, NLP and clustering within the forecasting vertical (a Monte Carlo range-forecast sketch appears at the end of this posting).
Primary Responsibilities Include, But Are Not Limited To
Responsible for one or multiple therapy areas, demonstrating good pharmaceutical knowledge and project management capability
Responsible for conceptualizing and delivering forecasts and analytical solutions, using both strategic and statistical techniques within the area of responsibility
Drive continuous enhancements to evolve existing forecasting capabilities in terms of value-add and risk/opportunity/uncertainty; identify and elevate key forecasting levers, insights and findings to inform decision making
Collaborate across stakeholders (our Manufacturing Division, Human Health, Finance, Research, Country, and senior leadership) to build robust assumptions, ensuring forecast accuracy improves over time to support decision making
Drive innovation and automation to bring robustness and efficiency gains to forecasting processes; incorporate best-in-class statistical forecasting methods to improve accuracy
Communicate effectively across stakeholders and proactively identify and resolve conflicts by engaging with relevant stakeholders
Responsible for delivery of forecasts in a timely manner with allocated resources
Determine the optimal method for forecasting, considering the context of the forecast, availability of data, the degree of accuracy desired, and the timeline available
Contribute to evolving our offerings through innovation and standardization/automation of various offerings, models and processes
Qualifications And Skills
Engineering/Management/Pharma postgraduates with 3+ years of experience in relevant roles, including 1-2 years of experience in pharmaceutical strategic forecasting or analytics
Proven ability to work collaboratively across large and diverse functions and stakeholders
Ability to manage ambiguous environments and adapt to changing business needs
Strong analytical skills; an aptitude for problem solving and strategic thinking
Working knowledge of Monte Carlo simulations and range forecasting
Ability to synthesize complex information into clear and actionable insights
Proven ability to communicate effectively with stakeholders
Solid understanding of pharmaceutical development, manufacturing, supply chain and marketing functions
Current Employees apply HERE
Current Contingent Workers apply HERE
Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular
Flexible Work Arrangements: Hybrid
Required Skills: Business Analysis, Marketing, Numerical Analysis, Stakeholder Relationship Management, Strategic Planning, Waterfall Model
Job Posting End Date: 05/07/2025. A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R337392
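To illustrate the Monte Carlo range forecasting mentioned in this description, here is a small sketch: uncertain inputs are sampled, pushed through a simple revenue model, and summarized as P10/P50/P90. All distributions and figures are invented purely for illustration.

```python
# Monte Carlo range forecast sketch: sample uncertain inputs, propagate
# through a toy revenue model, report percentiles. Numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

market_size = rng.normal(1_000_000, 100_000, N)    # eligible patients
peak_share = rng.triangular(0.05, 0.12, 0.20, N)   # min, mode, max
annual_price = rng.normal(2_500, 250, N)           # per patient, USD

revenue = market_size * peak_share * annual_price
p10, p50, p90 = np.percentile(revenue, [10, 50, 90])
print(f"P10: {p10:,.0f}  P50: {p50:,.0f}  P90: {p90:,.0f}")
```

Reporting a range rather than a point estimate is precisely the "risk/opportunity/uncertainty" framing the responsibilities above call for.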
Posted 1 week ago
3.0 - 8.0 years
7 - 11 Lacs
Bengaluru
Work from Office
We are looking for an Infrastructure Development Engineer to join our team. This is an exciting opportunity to work on Clarivate's Ex Libris project. This position will enable you to participate in a highly professional infrastructure engineering team that collaborates with the other engineering and operations teams to shape our emerging IT and cloud management solutions using innovative technologies.
About you
At least 3 years of experience in system engineering in a cloud environment for tools, infrastructure and automation development
Proven development experience with Python and Bash scripting - 3 years' experience
Strong background in Linux/Unix administration and systems at scale (RedHat/CentOS/Oracle Linux) - 3 years' experience
Experience with containers and Kubernetes at various levels, including system administration and operations of multiple clusters - 2 years' experience (a minimal cluster health-check sketch follows this posting)
Experience with DevOps and CI/CD concepts and tools - 2 years' experience
Understanding of systems architecture and infrastructure: networking, security, storage and systems - 3 years' experience
It would be great if you also had...
Experience with configuration management using Ansible/Terraform
Experience with log management tools using Elasticsearch
Experience with virtualization platforms: RedHat KVM/VMware vSphere
Experience with version control tools based on Git: GitHub/GitLab
Experience with monitoring tools and alerting systems based on Prometheus and Grafana
Knowledge of secrets management tools: HashiCorp Vault
What will you be doing in this role?
Design and implement pragmatic, scalable solutions, favoring simplicity and flexibility to meet business needs.
Research, evaluate, and recommend standards, tools, technologies, and services to support infrastructure strategy.
Ensure application and infrastructure architectures are stable, highly available, secure, and compliant with internal policies and external regulations.
Design, develop, and maintain automation solutions that support and optimize on-premises cloud infrastructure.
Build, manage, and enhance CI/CD pipelines to support reliable, repeatable, and secure software delivery processes.
Administer and support private cloud management tools, ensuring efficient orchestration, provisioning, and lifecycle management of infrastructure resources.
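A minimal sketch of the kind of multi-cluster operational check this role involves, using the official kubernetes Python client to list pods that are not Running or Succeeded. It assumes a reachable kubeconfig; the optional context name is a placeholder.

```python
# Cluster health-check sketch using the official `kubernetes` Python client.
# Assumes a local kubeconfig; the context name is a placeholder.
from kubernetes import client, config

def unhealthy_pods(context=None):
    """Return 'namespace/name: phase' strings for pods not Running/Succeeded."""
    config.load_kube_config(context=context)
    v1 = client.CoreV1Api()
    bad = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            bad.append(
                f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}"
            )
    return bad

if __name__ == "__main__":
    # Loop over several contexts to cover multiple clusters, e.g.
    # unhealthy_pods(context="prod-cluster").
    for line in unhealthy_pods():
        print(line)
```

In practice a check like this would be exported as Prometheus metrics and alerted on via Grafana rather than printed, but the listing logic is the same.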
Posted 1 week ago
5.0 years
0 Lacs
India
On-site
Role & Responsibilities:
Experience working closely with other data scientists, data engineers, software engineers, data managers and business partners.
Can build scalable, reusable, impactful data science products, usually containing statistical or machine learning algorithms, in collaboration with data engineers and software engineers.
Can carry out data analyses to yield actionable business insights.
Hands-on experience (typically 5+ years) designing, planning, prototyping, productionizing, maintaining and documenting reliable and scalable data science products in complex environments.
Applied knowledge of data science tools and approaches across all data lifecycle stages.
Thorough understanding of the mathematical foundations underlying statistics and machine learning.
Development experience in one or more object-oriented programming languages (e.g. Python, Go, Java, C++).
Advanced SQL knowledge.
Knowledge of experimental design and analysis.
Customer-centric and pragmatic mindset; focus on value delivery and swift execution, while maintaining attention to detail.
In addition to the above, the following skills should also be assessed:
Classical ML (supervised/unsupervised learning: regression, clustering, etc.)
Deep learning (where needed, likely limited to fine-tuning an existing model rather than building one from scratch)
Optimization (linear, non-linear, etc.) - see the sketch after this posting
LLMs/RAG
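As a small example of the linear-optimization skill listed above, the sketch below maximizes profit for a two-product mix under capacity constraints with scipy.optimize.linprog (which minimizes, hence the negated objective). All coefficients are illustrative.

```python
# Linear programming sketch: choose a production mix maximizing profit
# subject to capacity constraints. Coefficients are invented for illustration.
from scipy.optimize import linprog

# maximize 30*x1 + 45*x2  ->  minimize -(30*x1 + 45*x2)
c = [-30, -45]
A_ub = [
    [1, 2],   # machine hours:  x1 + 2*x2 <= 100
    [3, 1],   # labour hours:  3*x1 +  x2 <= 120
]
b_ub = [100, 120]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print(res.x, -res.fun)  # optimal quantities and maximized profit
```

The same formulation pattern (decision variables, linear objective, linear constraints) generalizes to allocation and scheduling problems; non-linear cases would move to scipy.optimize.minimize or a dedicated solver.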
Posted 1 week ago