
1832 Keras Jobs - Page 14

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

2.0 - 6.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Job Title: Retail Specialized Data Scientist, Level 9, SnC GN Data & AI
Management Level: 09 - Consultant
Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata

Must have skills:
- A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions.
- Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams.
- Meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, Scikit-learn).
- Expertise in supervised and unsupervised learning algorithms.
- Use advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

Good to have skills:
- Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing.
- Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows.
- Familiarity with designing scalable and efficient data pipelines and architecture.
- Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.

Job Summary: The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Roles & Responsibilities:
- Leverage retail knowledge: utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs.
- Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns.
- Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models.
- Use AI-driven techniques for personalization, demand forecasting, and fraud detection.
- Use advanced statistical methods to help optimize existing use cases and build new products to serve new challenges and use cases.
- Stay updated on the latest trends in data science and retail technology.
- Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills:
- Strong analytical and statistical skills.
- Expertise in machine learning and AI.
- Experience with retail-specific datasets and KPIs.
- Proficiency in data visualization and reporting tools.
- Ability to work with large datasets and complex data structures.
- Strong communication skills to interact with both technical and non-technical stakeholders.
- A solid understanding of the retail business and consumer behavior.
- Programming Languages: Python, R, SQL, Scala
- Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn
- Big Data Technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

Additional Information:
- Experience: Minimum 3 year(s) of experience is required.
- Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
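The skills list above centers on Pandas and Scikit-learn for work such as customer segmentation. Purely as an illustrative sketch (not part of the posting), the snippet below clusters customers from a toy RFM-style table; the column names and values are hypothetical placeholders.

```python
# Illustrative sketch: customer segmentation with Pandas and scikit-learn.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Toy RFM-style table; in practice this would be aggregated from sales transactions.
rfm = pd.DataFrame({
    "recency_days": [5, 40, 3, 90, 12, 60],
    "frequency":    [12, 2, 20, 1, 8, 3],
    "monetary":     [820, 95, 1500, 40, 560, 130],
}, index=["c1", "c2", "c3", "c4", "c5", "c6"])

# Standardize the features so no single KPI dominates the distance metric.
features = StandardScaler().fit_transform(rfm)
rfm["segment"] = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(features)
print(rfm.sort_values("segment"))  # inspect the resulting segment profiles
```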

Posted 2 weeks ago


8.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Roles & Responsibilities

Day-to-day technical responsibilities:
- Design solutions and implement generative AI models and algorithms, utilizing state-of-the-art algorithms such as GPT, VAE, and GANs.
- Work on integrating cloud with GenAI algorithms like ChatGPT, Bedrock, Bard, PaLM, etc.
- Collaborate with cross-functional teams to define AI project requirements and objectives, ensuring alignment with overall business goals.
- Research the latest advancements in generative AI, machine learning, and deep learning techniques, and identify opportunities to integrate them into our products and services.
- Optimize existing generative AI models for improved performance, scalability, and efficiency.
- Develop and maintain AI pipelines, including data pre-processing, feature extraction, model training, and evaluation.
- Drive the team to build GenAI products and solutions, working in an agile methodology.

Stakeholder interaction & management:
- Work with internal teams and get relevant data/inputs for preparing documents such as RFP responses, capability demonstrations, client presentations and collaterals, participating in customer calls to sell solutions, etc.
- Collaborate with relevant stakeholders/teams to get feedback and make revisions to ensure that the proposal stays relevant to the needs throughout the various proposal stages.
- Ensure communication among all parties throughout the proposal process.
- Identify bottlenecks in the process, escalating to higher levels as necessary, to ensure the timetable and deliverables remain on track.
- Reach out to the below-mentioned internal teams during proposal creation.

Effective project management:
- Effectively and efficiently plan, organize, lead and control the delivery of the final solution/proposal/document.
- Prepare a work plan that lists the tasks required to create the proposal, such as design, writing, editing, review and production.
- Use strong interpersonal, organizational, and time management skills to juggle multiple tasks with differing deadlines and consistently produce the document.
- Collaborate with and influence internal key stakeholders to get relevant data within the specified timeline, ensuring the relevant data is plugged into the solution.
- Ensure promptness and compliance of proposals by creating and managing proposal calendars, compliance checklists, compliance matrices, trackers, etc.
- Oversee collection and completion of all proposal components (technical, cost, management, annexes), working in collaboration with internal teams as well as partners.
- Follow up with relevant stakeholders/teams to get feedback and revisions and ensure that proposal development stays on schedule.
- Update the sales team and other stakeholders on a regular basis on the progress of the proposal.

Tech skills you will have:
- You have solid experience over the last few years developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAE, and GANs.
- You are proficient in Python and have experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Familiarity with cloud-based platforms and services, such as AWS, GCP, or Azure.
- Prompt engineering for GenAI algorithms.
- Experience with natural language processing (NLP) techniques and tools, such as SpaCy, NLTK, or Hugging Face.
- Deployment of GenAI models.
- Scrum / Agile.

Why work at Wipro
We pride ourselves on creating an inclusive workplace that provides equal opportunities to all persons regardless of their age, cultural background, sexual orientation, gender identity and expression, disability, veteran status, or anything else. If you only meet some of the requirements for this role, that's okay! We value a diverse range of backgrounds & ideas and believe this is fundamental for our future success. So, if you have the curiosity to learn and the willingness to teach what you know, we'd love to hear from you.

Wipro has been globally recognized by several organizations for our commitment to sustainability, inclusion, and diversity. Social good is in our DNA: we believe in sustainability for the health of our planet, its inhabitants, and our business. For over 75 years we have operated as a purpose-driven company with an unwavering commitment to our customers and our communities. Energized by what we call the Spirit of Wipro, we commit ourselves to being a catalyst for change, working to build a more just, equitable and sustainable society. Around 66% of Wipro's economic ownership is pledged towards philanthropic purposes. All of our employees are expected to embody Wipro's 5 Habits for Success, which are: Being Respectful, Being Responsive, Always Communicating, Demonstrate Stewardship, Building Trust.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner. Applications from people with disabilities are explicitly welcome.
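The Wipro role above revolves around GPT-style models and Hugging Face tooling. As a hedged illustration only, the sketch below runs prompt-driven text generation with the Transformers pipeline API; the model name and prompt are arbitrary example choices, not requirements from the posting.

```python
# Illustrative sketch: minimal prompt-driven text generation with Hugging Face
# Transformers. The model (distilgpt2) and prompt are placeholder choices.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Summarize the benefits of automated RFP drafting:"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```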

Posted 2 weeks ago


5.0 - 9.0 years

9 - 13 Lacs

Gurugram

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or along with a team to help design and develop ML/NLP models as per the requirement. You will work closely with the Product Owner, Systems Architect and other key stakeholders right from conceptualization till the implementation of the project. You should take ownership while understanding the client requirement, the data to be used, security & privacy needs and the infrastructure to be used for the development and implementation. The candidate will be responsible for executing data science projects independently to deliver business outcomes and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The role also requires the candidate to collaborate on ML asset creation and to be eager to learn and impart trainings to fellow data science professionals. We expect thought leadership from the candidate, especially on proposing to build an ML/NLP asset based on expected industry requirements. Experience in building industry-specific (e.g. Manufacturing, R&D, Supply Chain, Life Sciences, etc.), production-ready AI models using microservices and web services is a plus.

- Programming Languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
- Databases: RDBMS (MySQL, Oracle etc.), NoSQL stores (HBase, Cassandra etc.)
- ML/DL Frameworks: Scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks - Spark (Spark ML, GraphX), H2O
- Cloud: Azure/AWS/GCP

Your Profile
- Predictive and prescriptive modelling using statistical and machine learning algorithms including but not limited to Time Series, Regression, Trees, Ensembles, Neural Nets (deep & shallow, CNN, LSTM, Transformers etc.).
- Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection etc. is a plus.
- Unsupervised learning: Market Basket Analysis, Collaborative Filtering, Dimensionality Reduction, and a good understanding of common matrix decomposition approaches like SVD.
- Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering like Spectral.
- NLP: Information Extraction, Similarity Matching, Sentiment Analysis, Text Clustering, Semantic Analysis, Document Summarization, Context Mapping/Understanding, Intent Classification, Word Embeddings, Vector Space Models; experience with libraries like NLTK, spaCy, Stanford CoreNLP is a plus.
- Usage of Transformers for NLP, experience with LLMs like ChatGPT and Llama, usage of RAG (vector stores and frameworks like LangChain & LangGraph), and building agentic AI applications.
- Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and on cloud.

Required Qualifications
- Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science or Engineering, or a bachelor's degree with relevant experience.
- Good experience in programming with languages such as Python/Java/Scala and SQL, and experience with data visualization tools like Tableau or Power BI.

Preferred Experience
- Experienced in the Agile way of working; managing team effort and tracking through JIRA.
- Experience in proposal, RFP, RFQ and pitch creation and delivery to large forums.
- Experience in POC, MVP, PoV and asset creation with innovative use cases.
- Experience working in a consulting environment is highly desirable.
- Presupposition: high-impact client communication.
- The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to effectively communicate by telephone, email, and face to face.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
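Among the NLP skills the Capgemini posting lists is similarity matching. Purely as an illustrative sketch, the snippet below ranks a few made-up documents against a query with TF-IDF and cosine similarity using scikit-learn; none of the text comes from the posting.

```python
# Illustrative sketch: simple NLP similarity matching with TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Invoice overdue for industrial pump order",
    "Request to reschedule preventive maintenance visit",
    "Quality complaint about late shipment of spare parts",
]
query = ["late delivery of spare parts"]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)   # fit vocabulary on the corpus
query_vector = vectorizer.transform(query)          # reuse the same vocabulary

scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = scores.argmax()
print(f"Best match: {documents[best]!r} (score={scores[best]:.2f})")
```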

Posted 2 weeks ago


2.0 - 5.0 years

5 - 8 Lacs

Gurugram

Work from Office

- Programming Languages: Python, Scala
- Machine Learning frameworks: Scikit-learn, XGBoost, TensorFlow, Keras, PyTorch, spaCy, Gensim, Stanford NLP, NLTK, OpenCV, Spark MLlib
- Machine Learning algorithms experience: good to have
- Scheduling experience: Airflow
- Big Data / Streaming / Queues: Apache Spark, Apache NiFi, Apache Kafka, RabbitMQ (any one of them)
- Databases: MySQL, Mongo/Redis/DynamoDB, Hive
- Source Control: Git
- Cloud: AWS
- Build and Deployment: Jenkins, Docker, Docker Swarm, Kubernetes
- BI tool: QuickSight (preferred), else any BI tool (must have)
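Since the stack above calls out Airflow for scheduling, here is a minimal, hedged sketch of an Airflow 2.x DAG chaining a feature-build task into a batch-scoring task; the DAG id, schedule, and task bodies are placeholders, and exact DAG arguments can differ slightly between Airflow versions.

```python
# Illustrative sketch: a minimal Apache Airflow DAG with two dependent tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def build_features():
    print("building features...")  # placeholder for a real feature pipeline


def score_model():
    print("scoring model...")  # placeholder for real batch scoring


with DAG(
    dag_id="daily_scoring_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    scoring = PythonOperator(task_id="score_model", python_callable=score_model)
    features >> scoring  # scoring runs only after feature build succeeds
```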

Posted 2 weeks ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us Yubi stands for ubiquitous. But Yubi will also stand for transparency, collaboration, and the power of possibility. From being a disruptor in India’s debt market to marching towards global corporate markets from one product to one holistic product suite with seven products Yubi is the place to unleash potential. Freedom, not fear. Avenues, not roadblocks. Opportunity, not obstacles. About Yubi Yubi, formerly known as CredAvenue, is re-defining global debt markets by freeing the flow of finance between borrowers, lenders, and investors. We are the world's possibility platform for the discovery, investment, fulfillment, and collection of any debt solution. At Yubi, opportunities are plenty and we equip you with tools to seize it. In March 2022, we became India's fastest fintech and most impactful startup to join the unicorn club with a Series B fundraising round of $137 million. In 2020, we began our journey with a vision of transforming and deepening the global institutional debt market through technology. Our two-sided debt marketplace helps institutional and HNI investors find the widest network of corporate borrowers and debt products on one side and helps corporates to discover investors and access debt capital efficiently on the other side. Switching between platforms is easy, which means investors can lend, invest and trade bonds - all in one place. Our platforms shake up the traditional debt ecosystem and offer new ways of digital finance. Yubi Credit Marketplace - With the largest selection of lenders on one platform, our credit marketplace helps enterprises partner with lenders of their choice for any capital requirements. Yubi Invest - Fixed income securities platform for wealth managers & financial advisors to channel client investments in fixed income Financial Services Platform - Designed for financial institutions to manage co-lending partnerships & asset-based securitization Spocto - Debt recovery & risk mitigation platform Accumn - Dedicated SaaS solutions platform powered by Decision-grade data, Analytics, Pattern Identifications, Early Warning Signals and Predictions to Lenders, Investors and Business Enterprises So far, we have onboarded more than 17000 enterprises, 6200 investors, and lenders and facilitated debt volumes of over INR 1,40,000 crore. Backed by marquee investors like Insight Partners, B Capital Group, Dragoneer, Sequoia Capital, LightSpeed and Lightrock, we are the only-of-its-kind debt platform globally, revolutionizing the segment. At Yubi, People are at the core of the business and our most valuable assets. Yubi is constantly growing, with 1000+ like-minded individuals today who are changing the way people perceive debt. We are a fun bunch who are highly motivated and driven to create a purposeful impact. Come join the club to be a part of our epic growth story. Requirements Key Responsibilities: Lead and mentor a dynamic Data Science team in developing scalable, reusable tools and capabilities to advance machine learning models, specializing in computer vision, natural language processing, API development and Product building. Drive innovative solutions for complex CV-NLP challenges, including tasks like image classification, data extraction, text classification, and summarization, leveraging a diverse set of data inputs such as images, documents, and text. 
Collaborate with cross-functional teams, including DevOps and Data Engineering, to design and implement efficient ML pipelines that facilitate seamless model integration and deployment in production environments. Spearhead the optimization of the model development lifecycle, focusing on scalability for training and production scoring to manage significant data volumes and user traffic. Implement cutting-edge technologies and techniques to enhance model training throughput and response times. Required Experience & Expertise: 3+ years of experience in developing computer vision models and applications. Extensive knowledge and experience in Data Science and Machine Learning techniques, with a proven track record in leading and executing complex projects. Deep understanding of the entire ML model development lifecycle, including design, development, training, testing/evaluation, and deployment, with the ability to guide best practices. Expertise in writing high-quality, reusable code for various stages of model development, including training, testing, and deployment. Advanced proficiency in Python programming, with extensive experience in ML frameworks such as Scikit-learn, TensorFlow, and Keras and API development frameworks such as Django, Fast API. Demonstrated success in overcoming OCR challenges using advanced methodologies and libraries like Tesseract, Keras-OCR, EasyOCR, etc. Proven experience in architecting reusable APIs to integrate OCR capabilities across diverse applications and use cases. Proficiency with public cloud OCR services like AWS Textract, GCP Vision, and Document AI. History of integrating OCR solutions into production systems for efficient text extraction from various media, including images and PDFs. Comprehensive understanding of convolutional neural networks (CNNs) and hands-on experience with deep learning models, such as YOLO. Strong capability to prototype, evaluate, and implement state-of-the-art ML advancements, particularly in OCR and CV-NLP. Extensive experience in NLP tasks, such as Named Entity Recognition (NER), text classification, and on finetuning of Large Language Models (LLMs). This senior role is tailored for visionary professionals eager to push the boundaries of CV-NLP and drive impactful data-driven innovations using both well-established methods and the latest technological advancements. Benefits We are committed to creating a diverse environment and are proud to be an equal-opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, or age.
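The Yubi role above highlights OCR with Tesseract and related libraries. As a minimal illustration (not an excerpt from their systems), the snippet below extracts text from an image with pytesseract; the image path is a hypothetical placeholder, and production use would add preprocessing, layout handling, and error handling.

```python
# Illustrative sketch: basic OCR text extraction with Tesseract via pytesseract.
from PIL import Image
import pytesseract

image = Image.open("sample_invoice.png")  # hypothetical scanned document
text = pytesseract.image_to_string(image, lang="eng")
print(text)
```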

Posted 2 weeks ago


6.0 - 10.0 years

8 - 12 Lacs

Navi Mumbai

Work from Office

Title: Lead Data Scientist (Python) Required Technical Skillset:Language : Python, PySpark Framework - Scikit-learn, TensorFlow, Keras, PyTorch, Libraries - NumPy, Pandas, Matplotlib, SciPy, Scikit-learn - DataFrame, Numpy, boto3 Database - Relational Database(Postgres), NoSQL Database (MongoDB) Cloud - AWS cloud platforms Other Tools - Jenkins, Bitbucket, JIRA, Confluence A machine learning engineer is responsible for designing, implementing, and maintaining machine learning systems and algorithms that allow computers to learn from and make predictions or decisions based on data. The role typically involves working with data scientists and software engineers to build and deploy machine learning models in a variety of applications such as natural language processing, computer vision, and recommendation systems. The key responsibilities of a machine learning engineer includes: Collecting and preprocessing large volumes of data, cleaning it up, and transforming it into a format that can be used by machine learning models. Model building which includes Designing and building machine learning models and algorithms using techniques such as supervised and unsupervised learning, deep learning, and reinforcement learning. Evaluating the model performance of machine learning models using metrics such as accuracy, precision, recall, and F1 score. Deploying machine learning models in production environments and integrating them into existing systems using CI/CD Pipelines, AWS Sagemaker Monitoring the performance of machine learning models and making adjustments as needed to improve their accuracy and efficiency. Working closely with software engineers, product managers and other stakeholders to ensure that machine learning models meet business requirements and deliver value to the organization. Requirements and Skills: Mathematics and Statistics: A strong foundation in mathematics and statistics is essential. They need to be familiar with linear algebra, calculus, probability, and statistics to understand the underlying principles of machine learning algorithms. Programming Skills: Should be proficient in programming languages such as Python. The candidate should be able to write efficient, scalable, and maintainable code to develop machine learning models and algorithms. Machine Learning Techniques: Should have a deep understanding of various machine learning techniques, such as supervised learning, unsupervised learning, and reinforcement learning and should also be familiar with different types of models such as decision trees, random forests, neural networks, and deep learning. Data Analysis and Visualization: Should be able to analyze and manipulate large data sets. The candidate should be familiar with data cleaning, transformation, and visualization techniques to identify patterns and insights in the data. Deep Learning Frameworks: Should be familiar with deep learning frameworks such as TensorFlow, PyTorch, and Keras and should be able to build and train deep neural networks for various applications. Big Data Technologies: A machine learning engineer should have experience working with big data technologies such as Hadoop, Spark, and NoSQL databases. They should be familiar with distributed computing and parallel processing to handle large data sets. Software Engineering: A machine learning engineer should have a good understanding of software engineering principles such as version control, testing, and debugging. 
They should be able to work with software development tools such as Git, Jenkins, and Docker. Communication and Collaboration: A machine learning engineer should have good communication and collaboration skills to work effectively with cross-functional teams such as data scientists, software developers, and business stakeholders.
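The posting above mentions evaluating models with metrics such as accuracy, precision, recall, and F1 score. A minimal sketch of computing those metrics with scikit-learn, using toy labels, looks like this:

```python
# Illustrative sketch: the evaluation metrics named above, via scikit-learn.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 1, 0, 0]   # ground-truth labels (toy data)
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]   # model predictions (toy data)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```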

Posted 2 weeks ago


6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: DevOps/MLOps Expert Location: Gurugram (On-Site) Employment Type: Full-Time Experience: 6 + years Qualification: B.Tech CSE About the Role We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure, deployment pipelines, and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements. Key Responsibilities MLOps & Model Deployment • Design, implement, and maintain end-to-end ML pipelines from model development to production deployment • Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions • Implement model versioning, experiment tracking, and model registry systems • Monitor model performance, detect drift, and implement automated retraining pipelines • Manage feature stores and data pipelines for real-time and batch inference • Build scalable ML infrastructure for high-volume data processing and analytics Enterprise Cloud Infrastructure & DevOps • Architect and manage cloud-native infrastructure with focus on scalability, security, and compliance • Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi • Design and maintain Kubernetes clusters for containerized ML workloads • Build and optimize Docker containers for ML applications and microservices • Implement comprehensive monitoring, logging, and alerting systems • Manage secrets, security, and enterprise compliance requirements Data Engineering & Real-time Processing • Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools • Implement real-time data processing and streaming architectures • Design data storage solutions for structured and unstructured data at scale • Implement data validation, quality checks, and lineage tracking • Manage data security, privacy, and enterprise compliance requirements • Optimize data processing for performance and cost efficiency Enterprise Platform Operations • Ensure high availability (99.9%+) and performance of enterprise-grade platforms • Implement auto-scaling solutions for variable ML workloads • Manage multi-tenant architecture and data isolation • Optimize resource utilization and cost management across environments • Implement disaster recovery and backup strategies • Build 24x7 monitoring and alerting systems for mission-critical applications Required Qualifications Experience & Education • 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems • Bachelor’s/Master’s degree in Computer Science, Engineering, or related technical field • Proven experience with enterprise-grade platforms or large-scale SaaS applications • Experience with high-compliance environments and enterprise security requirements • Strong background in data-intensive applications and real-time processing systems Technical Skills Core MLOps Technologies • ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost • MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases • Model Serving: TensorFlow 
Serving, PyTorch TorchServe, Seldon Core, KFServing • Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet DevOps & Cloud Technologies • Cloud Platforms: AWS, Azure, or GCP with relevant certifications • Containerization: Docker, Kubernetes (CKA/CKAD preferred) • CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI • IaC: Terraform, CloudFormation, Pulumi, Ansible • Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic Programming & Scripting • Python (advanced) - primary language for ML operations and automation • Bash/Shell scripting for automation and system administration • YAML/JSON for configuration management and APIs • SQL for data operations and analytics • Basic understanding of Go or Java (advantage) Data Technologies • Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi • Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis • Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse • Data Warehousing: Snowflake, BigQuery, Redshift, Databricks • Data Versioning: DVC, LakeFS, Pachyderm Preferred Qualifications Advanced Technical Skills • Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001) • High-scale Processing: Experience with petabyte-scale data processing and real-time analytics • Performance Optimization: Advanced system optimization, distributed computing, caching strategies • API Development: REST/GraphQL APIs, microservices architecture, API gateways Enterprise & Domain Experience • Previous experience with enterprise clients or B2B SaaS platforms • Experience with compliance-heavy industries (finance, healthcare, government) • Understanding of data privacy regulations (GDPR, SOX, HIPAA) • Experience with multi-tenant enterprise architectures Leadership & Collaboration • Experience mentoring junior engineers and technical team leadership • Strong collaboration with data science teams, product managers, and enterprise clients • Experience with agile methodologies and enterprise project management • Understanding of business metrics, SLAs, and enterprise ROI Growth Opportunities • Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure • Technical Growth: Work with cutting-edge enterprise AI/ML technologies • Leadership: Opportunity to build and lead the DevOps/Infrastructure team • Industry Exposure: Work with Government & MNCs enterprise clients and cutting-edge technology stacks Success Metrics & KPIs Technical KPIs • System Uptime: Maintain 99.9%+ availability for enterprise clients • Deployment Frequency: Enable daily deployments with zero downtime • Performance: Ensure optimal response times and system performance • Cost Optimization: Achieve 20-30% annual infrastructure cost reduction • Security: Zero security incidents and full compliance adherence Business Impact • Time to Market: Reduce deployment cycles and improve development velocity • Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores • Team Productivity: Improve engineering team efficiency by 40%+ • Scalability: Support rapid client base growth without infrastructure constraints Why Join Us Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals. Competitive compensation, flexible working environment, and continuous learning opportunities. 
How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com , bhavik@aaizeltech.com or anju@aaizeltech.com.
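The MLOps stack above names MLflow for experiment tracking and model registry work. As a hedged sketch only, the snippet below logs a run's parameters and metrics with MLflow; the experiment name and values are placeholders, not anything specified by the employer.

```python
# Illustrative sketch: logging an experiment run with MLflow.
import mlflow

mlflow.set_experiment("churn-model")  # creates the experiment if it doesn't exist

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "xgboost")
    mlflow.log_param("max_depth", 6)
    mlflow.log_metric("val_auc", 0.87)
    mlflow.log_metric("val_f1", 0.81)
```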

Posted 2 weeks ago


3.0 - 8.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Work on the ML side of product portfolio, in collaboration with project and product managers: automate and optimize data pipelines, improve models used in production to be more accurate or more efficient, etc. Use and develop machine learning and deep learning algorithms to solve applied problems in Computer Vision discipline.
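For context on the kind of applied computer-vision work described above, here is a small, illustrative Keras CNN definition; the input shape and class count are arbitrary assumptions, not details from the posting.

```python
# Illustrative sketch: a compact Keras CNN for image classification.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(64, 64, 3)),          # placeholder image size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),   # e.g. 10 object classes (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```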

Posted 2 weeks ago


3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

PwC AC is hiring for Data Scientist. Apply and get a chance to work with one of the Big 4 companies, #PwC AC.

Job Title: Data Scientist
Years of Experience: 3-7 years
Shift Timings: 11 AM - 8 PM
Qualification: Graduate and above (full time)

About PwC CTIO – AI Engineering
PwC's Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence.

Role Overview
We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team. This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team's innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
- Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
- Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
- Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar. Also develop applications in Streamlit/Chainlit for demo purposes.
- Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
- Fine-tune and pre-train LLMs (Hugging Face and similar libraries) to align with business objectives.
- Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
- Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
- Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team's work.

Required Skills & Experience
- 3 to 7 years of experience in Data Science/ML/AI roles.
- Bachelor's degree in Computer Science, Engineering, or an equivalent technical discipline (BE/BTech/MCA).
- Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, Scikit-learn, TensorFlow, PyTorch, Keras, etc.
- Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
- Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
- Strong knowledge of NLP techniques, transformer architectures, and text analysis.
- Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
- Understanding of production-level AI systems including CI/CD, model monitoring, and cloud-native architecture (need not develop from scratch).
- Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, Decision Forests, Naive Bayes, Neural Networks, etc.
- Exposure to deploying AI models via APIs and integration into larger data ecosystems.
- Strong understanding of model operationalization and lifecycle management.

Good to Have
- Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
- Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
- Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
- Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.

Soft Skills & Team Expectations
- Strong written and verbal communication; able to explain complex models to business stakeholders.
- Ability to independently document work, manage requirements, and self-drive technical discovery.
- Desire to innovate, improve, and automate existing processes and solutions.
- Active contributor to team knowledge sharing, technical forums, and innovation drives.
- Strong interpersonal skills to build relationships across cross-functional teams.
- A mindset of continuous learning and technical curiosity.

Preferred Certifications (at least two are preferred)
- Certifications in Machine Learning, Deep Learning, or Natural Language Processing.
- Python programming certifications (e.g., PCEP/PCAP).
- Cloud certifications (Azure/AWS/GCP) such as Azure AI Engineer, AWS ML Specialty, etc.

Why Join PwC CTIO?
- Be part of a mission-driven AI innovation team tackling industry-wide transformation challenges.
- Gain exposure to bleeding-edge GenAI research, rapid prototyping, and product development.
- Contribute to a diverse portfolio of AI solutions spanning pharma, finance, and core business domains.
- Operate in a startup-like environment within the safety and structure of a global enterprise.
- Accelerate your career as a deep tech leader in an AI-first future.
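The PwC role above calls for operationalizing models with Python frameworks such as FastAPI. A minimal, hedged sketch of serving a pre-trained model behind a FastAPI endpoint might look like the following; the model file name and feature schema are hypothetical, not part of the posting.

```python
# Illustrative sketch: exposing a trained model behind a FastAPI endpoint.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")
model = joblib.load("model.joblib")  # hypothetical pre-trained scikit-learn model


class Features(BaseModel):
    values: list[float]  # flat feature vector for a single example


@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Run locally with, for example, `uvicorn app:app --reload` and POST a JSON body like `{"values": [1.0, 2.0, 3.0]}` to `/predict`.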

Posted 2 weeks ago


7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT Key Responsibilities 1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process. 2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. They also communicate the value of a solution to stakeholders and clients. 3. Managing Stakeholders: Work with clients and stakeholders to understand their vision for the systems. Should also manage stakeholder expectations. 4. Architectural Oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows. 5. Problem Solving: Identify and troubleshoot technical problems in existing or new systems. Assist with solving technical problems when they arise. 6. Ensuring Quality: Ensure if systems meet security and quality standards. Monitor systems to ensure they meet both user needs and business goals. 7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams. 8. Tool & Framework Expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), Web App development frameworks and DevOps practices. 9. Continuous Improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement. Technical Skills 1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models. 2. Knowledge or experience working with self-hosted or managed LLMs. 3. Knowledge or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers) and familiarity with Computer Vision frameworks like OpenCV and related libraries for image processing and object recognition. 4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express etc.) and building RESTful and GraphQL APIs. 5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability. 6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux) 7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached). 8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing. 9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets. 10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights. 11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform). 12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX). 13. 
Knowledge or experience in CI/CD, IaC and Cloud Native toolchains. 14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication. 15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management. Experience Required Technical Architect with 7 - 12 years of experience Salary 22-25 LPA Job Types: Full-time, Permanent Pay: ₹2,200,000.00 - ₹2,500,000.00 per year Work Location: In person
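The architect role above cites design patterns such as Factory for scalable, reusable code. Purely as an illustration, the sketch below uses a small factory to select a model backend by name; the backend classes are hypothetical stand-ins, not components from the employer's stack.

```python
# Illustrative sketch: a tiny Factory pattern for picking a model backend.
from abc import ABC, abstractmethod


class ModelBackend(ABC):
    @abstractmethod
    def predict(self, payload: dict) -> dict: ...


class SklearnBackend(ModelBackend):
    def predict(self, payload: dict) -> dict:
        return {"backend": "sklearn", "score": 0.5}  # placeholder logic


class TensorFlowBackend(ModelBackend):
    def predict(self, payload: dict) -> dict:
        return {"backend": "tensorflow", "score": 0.5}  # placeholder logic


def backend_factory(name: str) -> ModelBackend:
    backends = {"sklearn": SklearnBackend, "tensorflow": TensorFlowBackend}
    try:
        return backends[name]()
    except KeyError:
        raise ValueError(f"unknown backend: {name}") from None


print(backend_factory("sklearn").predict({"x": [1, 2, 3]}))
```

The point of the pattern here is that callers depend only on the `ModelBackend` interface, so new backends can be added without touching calling code.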

Posted 2 weeks ago


0 years

0 Lacs

Hyderabad, Telangana, India

Remote

When you join Verizon You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife. What You’ll Be Doing As a Principal (AI Scientist) you will own and perform specific analytics tasks and lead and drive end to end solutions for AI/ML use cases Designing and building scalable machine learning models to meet the needs of given Business engagement. Provide technical thought leadership on model architecture, delivery, monitoring, measurement and model lifecycle best practices. Working in collaborative environment with global teams to drive solutioning of business problems. Developing end to end analytical solutions, and articulating insights to leadership. Provide data-driven recommendations to business by clearly articulating complex modeling concepts through generation and delivery of presentations. Analyzing and model both structured and unstructured data from a number of distributed client and publicly available sources. Assisting with the mentorship and development of Junior members. Drive team towards solutions. Assisting in growing data science practice in Verizon, by meeting business goals through client prospecting, responding to model POC, identifying and closing opportunities within identified Insights, writing white papers, exploring new tools and defining best practices. What We’re Looking For… You have strong analytical and coding skills, eager to work in a collaborative environment with global teams to drive AI/ML application in business problems, develop end to end AI driven solutions and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various stakeholders and global cross functional teams to implement data science driven business solutions. You take pride in your role as a data scientist and evangelist and enjoy adding to the systems, concepts and models that enrich the practice. You’ll Need To Have Bachelor’s degree or four or more years of work experience. Six or more years of relevant work experience. Experience in implementing production use cases in the area of time-series forecasting and optimization. Experience in building large scale time-series forecasting solutions with proven business benefits. Experience with Deep Learning and application of transformer based architecture for various NLP, image or other use cases. Experience with diverse inputs, including (but not limited to) text, clickstream and sequential data. Working knowledge of packages like Pytorch, Keras and Tensorflow. Experience in machine learning, deep learning, and statistical modeling techniques. Experience in programming skills in Python or other related languages. Experience with natural language processing, computer vision, or audio processing. Experience in designing and implementing forecasting and optimizing algorithms, demonstrated through publications, projects, or industry experience. 
Even better if you have one or more of the following:
- Master's degree.
- Experience in writing and publishing original white papers in data science.
- Experience working on AI projects in the telecom domain.
- Experience working on cloud platforms like GCP.
- Problem-solving skills and ability to work on complex, open-ended research problems.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Self-motivated, proactive, and able to manage multiple tasks and priorities in a fast-paced environment.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above. #AI&D

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
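Given the posting's emphasis on time-series forecasting with Keras/TensorFlow, here is a compact, illustrative sketch of an LSTM forecaster trained on a synthetic series; the window size and architecture are arbitrary choices, not Verizon's.

```python
# Illustrative sketch: a small Keras LSTM next-step forecaster on synthetic data.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Synthetic series and sliding windows of 14 past steps -> next step.
series = np.sin(np.linspace(0, 40, 400)).astype("float32")
window = 14
X = np.stack([series[i: i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print("next-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```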

Posted 2 weeks ago


8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Role Proficiency: Leverage expertise in a technology area (e.g. Infromatica Transformation Terradata data warehouse Hadoop Analytics) Responsible for Architecture for a small/mid-size projects. Outcomes Implement either data extract and transformation a data warehouse (ETL Data Extracts Data Load Logic Mapping Work Flows stored procedures data warehouse) data analysis solution data reporting solutions or cloud data tools in any one of the cloud providers(AWS/AZURE/GCP) Understand business workflows and related data flows. Develop design for data acquisitions and data transformation or data modelling; applying business intelligence on data or design data fetching and dashboards Design information structure work-and dataflow navigation. Define backup recovery and security specifications Enforce and maintain naming standards and data dictionary for data models Provide or guide team to perform estimates Help team to develop proof of concepts (POC) and solution relevant to customer problems. Able to trouble shoot problems while developing POCs Architect/Big Data Speciality Certification in (AWS/AZURE/GCP/General for example Coursera or similar learning platform/Any ML) Measures Of Outcomes Percentage of billable time spent in a year for developing and implementing data transformation or data storage Number of best practices documented in any new tool and technology emerging in the market Number of associates trained on the data service practice Outputs Expected Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap Implement methods and procedures for tracking data quality completeness redundancy and improvement Ensure that data strategies and architectures meet regulatory compliance requirements Begin engaging external stakeholders including standards organizations regulatory bodies operators and scientific research communities or attend conferences with respect to data in cloud Operational Management Help Architects to establish governance stewardship and frameworks for managing data across the organization Provide support in implementing the appropriate tools software applications and systems to support data technology goals Collaborate with project managers and business teams for all projects involving enterprise data Analyse data-related issues with systems integration compatibility and multi-platform integration Project Control And Review Provide advice to teams facing complex technical issues in the course of project delivery Define and measure project and program specific architectural and technology quality metrics Knowledge Management & Capability Development Publish and maintain a repository of solutions best practices and standards and other knowledge articles for data management Conduct and facilitate knowledge sharing and learning sessions across the team Gain industry standard certifications on technology or area of expertise Support technical skill building (including hiring and training) for the team based on inputs from project manager /RTE’s Mentor new members in the team in technical areas Gain and cultivate domain expertise to provide best and optimized solution to customer (delivery) Requirement Gathering And Analysis Work with customer business owners and other teams to collect analyze and understand the requirements including NFRs/define NFRs Analyze gaps/ trade-offs based on current system context and industry practices; clarify the requirements by working with the 
customer Define the systems and sub-systems that define the programs People Management Set goals and manage performance of team engineers Provide career guidance to technical specialists and mentor them Alliance Management Identify alliance partners based on the understanding of service offerings and client requirements In collaboration with Architect create a compelling business case around the offerings Conduct beta testing of the offerings and relevance to program Technology Consulting In collaboration with Architects II and III analyze the application and technology landscapers process and tolls to arrive at the architecture options best fit for the client program Analyze Cost Vs Benefits of solution options Support Architects II and III to create a technology/ architecture roadmap for the client Define Architecture strategy for the program Innovation And Thought Leadership Participate in internal and external forums (seminars paper presentation etc) Understand clients existing business at the program level and explore new avenues to save cost and bring process efficiency Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices Project Management Support Assist the PM/Scrum Master/Program Manager to identify technical risks and come-up with mitigation strategies Stakeholder Management Monitor the concerns of internal stakeholders like Product Managers & RTE’s and external stakeholders like client architects on Architecture aspects. Follow through on commitments to achieve timely resolution of issues Conduct initiatives to meet client expectations Work to expand professional network in the client organization at team and program levels New Service Design Identify potential opportunities for new service offerings based on customer voice/ partner inputs Conduct beta testing / POC as applicable Develop collaterals guides for GTM Skill Examples Use data services knowledge creating POC to meet a business requirements; contextualize the solution to the industry under guidance of Architects Use technology knowledge to create Proof of Concept (POC) / (reusable) assets under the guidance of the specialist. Apply best practices in own area of work helping with performance troubleshooting and other complex troubleshooting. Define decide and defend the technology choices made review solution under guidance Use knowledge of technology t rends to provide inputs on potential areas of opportunity for UST Use independent knowledge of Design Patterns Tools and Principles to create high level design for the given requirements. Evaluate multiple design options and choose the appropriate options for best possible trade-offs. Conduct knowledge sessions to enhance team's design capabilities. Review the low and high level design created by Specialists for efficiency (consumption of hardware memory and memory leaks etc.) Use knowledge of Software Development Process Tools & Techniques to identify and assess incremental improvements for software development process methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with clear understanding of memory leakage and related impact. 
Implement global standards and guidelines relevant to programming and development come up with 'points of view' and new technological ideas Use knowledge of Project Management & Agile Tools and Techniques to support plan and manage medium size projects/programs as defined within UST; identifying risks and mitigation strategies Use knowledge of Project Metrics to understand relevance in project. Collect and collate project metrics and share with the relevant stakeholders Use knowledge of Estimation and Resource Planning to create estimate and plan resources for specific modules or small projects with detailed requirements or user stories in place Strong proficiencies in understanding data workflows and dataflow Attention to details High analytical capabilities Knowledge Examples Data visualization Data migration RDMSs (relational database management systems SQL Hadoop technologies like MapReduce Hive and Pig. Programming languages especially Python and Java Operating systems like UNIX and MS Windows. Backup/archival software. Additional Comments AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovations, and can adapt across multiple opportunities based on business priorities. Key Responsibilities: Design and architect AI-based solutions including multi-agent GenAI systems using LLMs and RAG pipelines. Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants. Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics. Leverage GenAI (LLMs) and Time Series models to drive intelligent observability and performance management. Work closely with product, engineering, and operations teams to align solutions with domain and customer needs. Own model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices. Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments. Key Skills & Technology Areas: AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI. LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps. Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis. Modeling & Analytics: Time Series Forecasting, Predictive Modeling, Synthetic Data Generation. Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory. Cloud & Tools: o AWS (Bedrock, SageMaker, Lambda), o Azure (Azure ML, Azure Databricks, Synapse), o GCP (Vertex AI – optional) Observability Integration: Splunk, ELK Stack, Prometheus. DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring & versioning. Architectural Patterns: Microservices, Event-Driven Architecture, Multi-Agent Systems, API-first Design. Other Requirements: Proven ability to work independently and collaboratively in agile, innovation-driven teams. Strong problem-solving mindset and product-oriented thinking. 
Excellent communication and technical storytelling skills. Flexibility to work across multiple opportunities based on business priorities. Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus. Skills: Python, Pandas, AI/ML, GenAI
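The AI Architect listing above centers on RAG pipelines backed by vector stores such as ChromaDB, Pinecone, or FAISS. Purely as an illustration of that pattern (not part of the posting), here is a minimal, framework-agnostic sketch of retrieve-then-generate; embed_fn and call_llm are hypothetical stand-ins for a real embedding model and LLM client.

```python
import numpy as np

def cosine_top_k(query_vec, doc_matrix, k=3):
    """Return indices of the k documents most similar to the query vector."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

def answer_with_rag(question, documents, embed_fn, call_llm, k=3):
    """Hypothetical retrieve-then-generate loop: embed, retrieve, prompt, generate."""
    doc_matrix = np.vstack([embed_fn(doc) for doc in documents])
    top_idx = cosine_top_k(embed_fn(question), doc_matrix, k)
    context = "\n\n".join(documents[i] for i in top_idx)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)  # call_llm stands in for any LLM client
```

In production the brute-force cosine search above would be replaced by an index such as FAISS or a managed vector database.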

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Mohali district, India

On-site

Job Title: DevOps/MLOps Expert Location: Mohali (On-Site) Employment Type: Full-Time Experience: 6+ years Qualification: B.Tech CSE About the Role We are seeking a highly skilled DevOps/MLOps Expert to join our rapidly growing AI-based startup building and deploying cutting-edge enterprise AI/ML solutions. This is a critical role that will shape our infrastructure and deployment pipelines and scale our ML operations to serve large-scale enterprise clients. As our DevOps/MLOps Expert, you will be responsible for bridging the gap between our AI/ML development teams and production systems, ensuring seamless deployment, monitoring, and scaling of our ML-powered enterprise applications. You’ll work at the intersection of DevOps, Machine Learning, and Data Engineering in a fast-paced startup environment with enterprise-grade requirements. Key Responsibilities MLOps & Model Deployment • Design, implement, and maintain end-to-end ML pipelines from model development to production deployment • Build automated CI/CD pipelines specifically for ML models using tools like MLflow, Kubeflow, and custom solutions • Implement model versioning, experiment tracking, and model registry systems • Monitor model performance, detect drift, and implement automated retraining pipelines • Manage feature stores and data pipelines for real-time and batch inference • Build scalable ML infrastructure for high-volume data processing and analytics Enterprise Cloud Infrastructure & DevOps • Architect and manage cloud-native infrastructure with a focus on scalability, security, and compliance • Implement Infrastructure as Code (IaC) using Terraform, CloudFormation, or Pulumi • Design and maintain Kubernetes clusters for containerized ML workloads • Build and optimize Docker containers for ML applications and microservices • Implement comprehensive monitoring, logging, and alerting systems • Manage secrets, security, and enterprise compliance requirements Data Engineering & Real-time Processing • Build and maintain large-scale data pipelines using Apache Airflow, Prefect, or similar tools • Implement real-time data processing and streaming architectures • Design data storage solutions for structured and unstructured data at scale • Implement data validation, quality checks, and lineage tracking • Manage data security, privacy, and enterprise compliance requirements • Optimize data processing for performance and cost efficiency Enterprise Platform Operations • Ensure high availability (99.9%+) and performance of enterprise-grade platforms • Implement auto-scaling solutions for variable ML workloads • Manage multi-tenant architecture and data isolation • Optimize resource utilization and cost management across environments • Implement disaster recovery and backup strategies • Build 24x7 monitoring and alerting systems for mission-critical applications Required Qualifications Experience & Education • 4-8 years of experience in DevOps/MLOps with at least 2+ years focused on enterprise ML systems • Bachelor’s/Master’s degree in Computer Science, Engineering, or related technical field • Proven experience with enterprise-grade platforms or large-scale SaaS applications • Experience with high-compliance environments and enterprise security requirements • Strong background in data-intensive applications and real-time processing systems Technical Skills Core MLOps Technologies • ML Frameworks: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost • MLOps Tools: MLflow, Kubeflow, Metaflow, DVC, Weights & Biases • Model Serving:
TensorFlow Serving, PyTorch TorchServe, Seldon Core, KFServing • Experiment Tracking: MLflow, Neptune.ai, Weights & Biases, Comet DevOps & Cloud Technologies • Cloud Platforms: AWS, Azure, or GCP with relevant certifications • Containerization: Docker, Kubernetes (CKA/CKAD preferred) • CI/CD: Jenkins, GitLab CI, GitHub Actions, CircleCI • IaC: Terraform, CloudFormation, Pulumi, Ansible • Monitoring: Prometheus, Grafana, ELK Stack, Datadog, New Relic Programming & Scripting • Python (advanced) - primary language for ML operations and automation • Bash/Shell scripting for automation and system administration • YAML/JSON for configuration management and APIs • SQL for data operations and analytics • Basic understanding of Go or Java (advantage) Data Technologies • Data Pipeline Tools: Apache Airflow, Prefect, Dagster, Apache NiFi • Streaming & Real-time: Apache Kafka, Apache Spark, Apache Flink, Redis • Databases: PostgreSQL, MongoDB, Elasticsearch, ClickHouse • Data Warehousing: Snowflake, BigQuery, Redshift, Databricks • Data Versioning: DVC, LakeFS, Pachyderm Preferred Qualifications Advanced Technical Skills • Enterprise Security: Experience with enterprise security frameworks, compliance (SOC2, ISO27001) • High-scale Processing: Experience with petabyte-scale data processing and real-time analytics • Performance Optimization: Advanced system optimization, distributed computing, caching strategies • API Development: REST/GraphQL APIs, microservices architecture, API gateways Enterprise & Domain Experience • Previous experience with enterprise clients or B2B SaaS platforms • Experience with compliance-heavy industries (finance, healthcare, government) • Understanding of data privacy regulations (GDPR, SOX, HIPAA) • Experience with multi-tenant enterprise architectures Leadership & Collaboration • Experience mentoring junior engineers and technical team leadership • Strong collaboration with data science teams, product managers, and enterprise clients • Experience with agile methodologies and enterprise project management • Understanding of business metrics, SLAs, and enterprise ROI Growth Opportunities • Career Path: Clear progression to Lead DevOps Engineer or Head of Infrastructure • Technical Growth: Work with cutting-edge enterprise AI/ML technologies • Leadership: Opportunity to build and lead the DevOps/Infrastructure team • Industry Exposure: Work with Government & MNC enterprise clients and cutting-edge technology stacks Success Metrics & KPIs Technical KPIs • System Uptime: Maintain 99.9%+ availability for enterprise clients • Deployment Frequency: Enable daily deployments with zero downtime • Performance: Ensure optimal response times and system performance • Cost Optimization: Achieve 20-30% annual infrastructure cost reduction • Security: Zero security incidents and full compliance adherence Business Impact • Time to Market: Reduce deployment cycles and improve development velocity • Client Satisfaction: Maintain 95%+ enterprise client satisfaction scores • Team Productivity: Improve engineering team efficiency by 40%+ • Scalability: Support rapid client base growth without infrastructure constraints Why Join Us Be part of a forward-thinking, innovation-driven company with a strong engineering culture. Influence high-impact architectural decisions that shape mission-critical systems. Work with cutting-edge technologies and a passionate team of professionals.
Competitive compensation, flexible working environment, and continuous learning opportunities. How to Apply Please submit your resume and a cover letter outlining your relevant experience and how you can contribute to Aaizel Tech Labs’ success. Send your application to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com.
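Experiment tracking and model registries are central to the MLOps role above. As an illustrative sketch only (the experiment name and data are placeholders), the snippet below logs hyperparameters, a metric, and a scikit-learn model with MLflow:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real training set.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_params(params)                 # record hyperparameters
    mlflow.log_metric("accuracy", acc)        # record evaluation metric
    mlflow.sklearn.log_model(model, "model")  # store the artifact for the registry
```

Runs logged this way can then be compared in the MLflow UI and promoted through the model registry as part of a CI/CD pipeline.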

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kozhikode, Kerala, India

On-site

Job Summary: We are looking for a passionate AI & IoT Engineer to lead and deliver technical training in Artificial Intelligence and the Internet of Things. This is a dedicated training and development role focused on building the next generation of tech talent. If you love teaching, mentoring, and breaking down complex technical topics, this role is for you. You will conduct sessions, guide hands-on labs, mentor learners, and help them apply AI & IoT concepts in real-world projects. Key Responsibilities: Conduct structured training programs on AI/ML and IoT (sensors, microcontrollers, platforms). Design and deliver hands-on labs and project-based learning activities. Create and update course materials including slides, assignments, quizzes, and project documentation. Provide mentorship and technical support to learners, ensuring their progress and understanding. Evaluate learner performance and support certification or assessment processes. Stay updated with current technologies and incorporate relevant tools and trends into the curriculum. Organize and participate in internal workshops, webinars, tech talks, and bootcamps. Coordinate with the academic or training team to align sessions with learning outcomes. Required Skills: Strong knowledge of Python and AI frameworks such as TensorFlow, Keras, or Scikit-learn. Familiarity with IoT devices and platforms (Arduino, Raspberry Pi, NodeMCU, etc.). Understanding of communication protocols like MQTT, HTTP, and Bluetooth. Clear and engaging communication skills with a flair for teaching and simplifying complex concepts. Strong analytical and problem-solving skills. Ability to create engaging learning materials and mentor effectively. Eagerness to continuously learn and improve training methodologies. To Apply: Send your resume to careers@corecognitics.com
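Hands-on labs of the kind this training role describes often start from a small neural network. The following sketch is illustrative only and uses synthetic data as a stand-in for real sensor readings; it shows the minimal Keras workflow (define, compile, fit, evaluate) a lab session might walk through:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in for sensor features and a binary label (e.g., anomaly vs. normal).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")

model = keras.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```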

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Work from Office

Project description We are seeking a skilled AI Agentic Developer with experience implementing solutions within financial institutions. Responsibilities Build end-to-end Gen AI solutions; develop, refine, and implement advanced Gen AI models; and ensure the successful delivery of projects. Lead the integration of LLMs and LangChain into business processes. Utilize Python and other data manipulation languages proficiently to prepare and manipulate data. Understand the business requirements and translate them into a Gen AI solution design that successfully meets the business objectives. Collaborate with stakeholders, presenting findings to a non-technical audience and providing strategic recommendations. Stay current with technical and industry developments and standards to ensure effective and advanced application of data analysis techniques and methodologies. Skills Must have A minimum of a Bachelor's degree in Computer Science, Mathematics, Engineering, Statistics, or a related field. At least 3 years of experience with Generative AI, specifically with Large Language Models (LLMs) and LangChain. Proficiency in Python and other applicable programming languages. Strong knowledge of machine learning, data mining, and predictive modeling. Excellent understanding of machine learning algorithms, processes, tools, and platforms. Strong problem-solving and strategic thinking abilities. Knowledge and experience in end-to-end project delivery, especially agile delivery methodologies or hybrid approaches. Agentic AI / Generative AI solution design and implementation. Exceptional communication, documentation, and presentation skills, and stakeholder management experience. Nice to have Knowledge of Agile Other Languages English: C1 Advanced Seniority Regular
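Agentic LLM solutions like those described above generally reduce to a loop in which the model either answers or requests a tool call. The sketch below is framework-free and illustrative only; call_llm and the tools dictionary are hypothetical stand-ins, not a specific LangChain API.

```python
import json

def run_agent(question, call_llm, tools, max_steps=5):
    """Toy agent loop: the (hypothetical) LLM replies with JSON that either
    answers directly or names a tool to invoke with arguments."""
    transcript = [f"User question: {question}"]
    for _ in range(max_steps):
        reply = json.loads(call_llm("\n".join(transcript)))
        if reply.get("action") == "final_answer":
            return reply["answer"]
        tool_name, tool_args = reply["action"], reply.get("args", {})
        observation = tools[tool_name](**tool_args)  # e.g. a rate lookup or DB query
        transcript.append(f"Tool {tool_name} returned: {observation}")
    return "Stopped: step limit reached."

# Usage sketch with placeholder components (a hypothetical FX-rate tool).
tools = {"fx_rate": lambda pair: {"EURUSD": 1.09}.get(pair)}
# answer = run_agent("What is the EURUSD rate?", call_llm=my_llm_client, tools=tools)
```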

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Company Description Incorporated in 1997, Compusoft Advisors India Pvt. Ltd. is a Software Solutions Provider that aims to be a valuable partner in its clients' success. The company offers world-class Enterprise Business Applications, Cloud Solutions, and Mobility Solutions at an affordable price. With a strong presence in India and Singapore, Compusoft serves customers in various domains including Manufacturing, Supply Chain, Retail, Financial Institutions, Real Estate, Education, IT and Professional Services, Aerospace, Aviation, Oil & Energy, and more. Role Description This is a full-time role for an AI Developer (Azure AI and Azure OpenAI) at Compusoft Advisors India Pvt. Ltd. located in Thane. As an AI Developer, you will be responsible for developing and implementing AI solutions using Azure AI and Azure OpenAI. Your tasks will include designing and deploying machine learning models, implementing natural language processing algorithms, and collaborating with cross-functional teams to identify business opportunities and provide customized solutions. Qualifications At least 3 years of experience in developing AI solutions using the Azure AI and Azure OpenAI platforms. Proficient in Python, C#, .NET, or other programming languages. Strong knowledge of AI concepts, techniques, and frameworks such as machine learning, deep learning, natural language processing, computer vision, and conversational AI. Experience in using AI tools and frameworks such as TensorFlow, PyTorch, Keras, Scikit-learn, NLTK, SpaCy, OpenCV, etc. Experience in using cloud services and technologies such as Azure DevOps, Azure Data Factory, Azure SQL Database, Azure Cosmos DB, Azure Storage, etc. Excellent communication, problem-solving, and analytical skills. Passionate about AI and eager to learn new skills and technologies. Bachelor's degree or higher in Computer Science, Engineering, Data Science, or a related field.
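For illustration only, here is a minimal sketch of calling an Azure OpenAI chat deployment from Python. It assumes the v1-style openai package with the AzureOpenAI client; the endpoint, API version, and deployment name are placeholders for your own resource's values.

```python
import os
from openai import AzureOpenAI  # assumes openai>=1.x with Azure support

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",                            # placeholder API version
)

response = client.chat.completions.create(
    model="my-gpt-4o-deployment",  # hypothetical deployment name, not a model ID
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize our returns policy in two sentences."},
    ],
)
print(response.choices[0].message.content)
```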

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description As a key member of our team, you will support our product development teams with insights gained from analyzing company data with respect to potential opportunities for product and process optimization. You must have strong experience using a variety of data mining/data analysis methods. The Senior Data Scientist II will lead data modeling and machine learning projects using a variety of data tools, modeling approaches, and algorithms. You will also design experiments to evaluate models by conducting A/B testing and simulations. Key Responsibilities Able to translate product design requirements into a pipeline of heuristic and stochastic algorithms. Understands exploratory data analysis for product feasibility studies and ground-truth testing. Executes SQL queries and/or Python scripts to manipulate, analyze, and visualize data. Able to implement explainable AI solutions and rationalize model inferences. Follows SDLC processes, adopts agile-based processes/meetings, and participates in peer code reviews. Works with machine learning engineers/architects to deploy data products into production. Follows and understands legal data use restrictions. Contributes to algorithm library development and design for ML, NLP, and XAI. Delivers product pipelines for deployment to production. Builds applications that integrate third-party and self-hosted foundation models. Fine-tunes open-source foundation models (LLMs, VLMs) with proprietary data. Develops autonomous AI inference and tool-use orchestration using ReAct AI agents. Provides root cause analysis for machine learning model inference. Completes data analysis or processing tasks as directed. Documents data product end-to-end design and development. Performs data annotation, labeling, and other related data generation activities. Provides thought leadership for the rest of the team and seeks out opportunities to mentor more junior team members. Presents and holds data product updates and trainings. Updates the team on data product performance. Education & Experience Strong problem-solving skills with an emphasis on product development. 6+ years of experience developing data science products. Strong experience using and optimizing common Python machine and deep learning libraries such as scikit-learn, PyTorch, TensorFlow, Keras, MXNet, and Spark MLlib. Experience using statistical computer languages (Python, R, Scala, SQL, etc.) to manipulate data and draw insights from large data sets. Hands-on generative AI development. Experience using foundation models (LLMs, VLMs). Experience with fine-tuning open-source foundation models with proprietary data. Experience leveraging AI metrics for monitoring and value tracking. Knowledge of AI agent frameworks, with recent hands-on experience building an AI agent able to autonomously use data stores, tools, and other AI models to solve inquiries. Deep knowledge of data science concepts and the related product development lifecycle. Experience using machine learning libraries such as TensorFlow, Keras, SparkML, etc. Working knowledge of machine learning tuning and optimization procedures. Experience working with and creating data architectures. Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience with their applications. Excellent written and verbal communication skills for coordinating across teams. A drive to learn and master new technologies and techniques.
Preferred Experience with big data analytical frameworks such as Spark/PySpark Experience analyzing data from 3rd party providers: Google Knowledge Graph, Wikidata, etc. Experience visualizing/presenting data for stakeholders using: Looker, PowerBI, Tableau, etc.
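The experiment-design responsibility above (evaluating models via A/B testing) can be made concrete with a two-proportion z-test. The sketch below is illustrative only; the conversion counts and traffic numbers are invented.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B result: conversions and visitors for control vs. new model variant.
conversions = [420, 465]
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("No significant difference detected; keep collecting data.")
```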

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As a Machine Learning Engineer, you will utilize your expertise to contribute to the development of a world-leading capability in the domain of machine learning. Your responsibilities will involve participating in the design, development, deployment, testing, maintenance, and enhancement of machine learning software solutions. You will be tasked with applying machine learning, deep learning, and signal processing techniques to large datasets such as audio, sensors, images, videos, and text in order to develop models. Additionally, you will be responsible for architecting large-scale data analytics and modeling systems, designing and programming machine learning methods, and integrating them into the ML framework and pipeline. Collaboration with data scientists and analysts to support the development of ML data pipelines, platforms, and infrastructure will be a key aspect of your role. You will also evaluate and validate analyses using statistical methods and present findings in a clear manner to individuals not well-versed in the fields of data science and computer science. Furthermore, you will create microservices and APIs for serving ML models and services, evaluate new machine learning methods for adoption, and conduct feature engineering to enhance model performance. Your background should include knowledge of recent advances in machine learning, deep learning, natural language processing, and image/signal/video processing, with at least 5 years of professional experience in real-world applications. Strong programming skills in languages and frameworks such as Python, PyTorch, MATLAB, C/C++, and Java, along with familiarity with software engineering concepts like OOP and design patterns, are essential. Proficiency in machine learning libraries such as TensorFlow, Keras, scikit-learn, and PyTorch is required. Additionally, you should possess excellent mathematical abilities, including expertise in accuracy and significance tests, visualization, and advanced probability concepts. Experience in architecting and implementing end-to-end solutions for accelerating experimentation and model building is highly desirable. A working knowledge of various machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and the ability to conduct independent and collaborative research are crucial for success in this role. Strong written and verbal communication skills are also important. Candidates with a B.E./B.Tech./B.S. degree and at least 3 years of experience in machine learning, deep learning, natural language processing, or image/signal processing will be considered. Candidates with an M.E./M.S./M.Tech./PhD in fields related to Computer Science and experience in machine learning, image and signal processing, or statistics are preferred for this position.
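The role above includes building microservices and APIs for serving ML models. As a minimal illustrative sketch (assuming FastAPI and scikit-learn; the model and feature names are placeholders), a prediction endpoint could look like this:

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Train a tiny placeholder model at startup; a real service would load a saved artifact.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

app = FastAPI(title="demo-model-service")

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(f: Features):
    row = [[f.sepal_length, f.sepal_width, f.petal_length, f.petal_width]]
    return {"predicted_class": int(model.predict(row)[0])}

# Run with: uvicorn service:app --reload  (assuming this file is saved as service.py)
```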

Posted 2 weeks ago

Apply

3.0 - 8.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Job Area: Engineering Group, Engineering Group > Systems Engineering General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Systems Engineer, you will research, design, develop, simulate, and/or validate systems-level software, hardware, architecture, algorithms, and solutions that enable the development of cutting-edge technology. Qualcomm Systems Engineers collaborate across functional teams to meet and exceed system-level requirements and standards. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Systems Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Systems Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 1+ year of Systems Engineering or related work experience. The candidate should have 10+ years of experience. Experience in C/C++ and computer vision/image processing is a must. Experience in camera technology and ML/DL is good to have. Experience in embedded/ARM programming is good to have but not necessary. Responsibilities: The job responsibilities may include a subset of the following: Designing computer vision/image processing for mobile devices. Designing and evaluating algorithms to be implemented in hardware or software prototypes. Developing or optimizing image processing and computer vision algorithms for HW acceleration. Supporting product teams for commercialization, such as solution optimization, performance profiling, and benchmarking. Test regression and release support. Preferred Qualifications: Exposure to or working experience in vision or multimedia accelerators. Working experience with image processing algorithms. Knowledge of/working experience in computer vision algorithms. Strong knowledge of data structures and working experience with C/C++ programming. Software optimization experience with various SIMD and multi-threading techniques. Applicants Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.
If you would like more information about this role, please contact Qualcomm Careers.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering General Summary: The CPU architecture team is driving the core math libraries needed for ML/AI acceleration. This position will expose you to Qualcomm's cutting-edge SoC and ML/AI platforms in the industry. Participate in optimizing the core ML kernels using the latest ARM CPU architecture advancements such as SME and SVE, and enhance the performance of ML models on the CPU of the Qualcomm SoC. Required Skills: 7+ years of experience with: understanding of ARM CPU architecture fundamentals and the ARM AArch64 ISA; optimizing kernels for vector processors; understanding of the basic linear algebra functions used in AI/ML; algorithm design (logic, critical thinking); performance evaluation and optimization of applications for the ARM architecture; inferencing of ML models written in PyTorch/TensorFlow/Keras; understanding of typical open-source library framework design. Preferred Skills: Strong programming skills and a deep understanding of the ARM ISA; understanding of algorithms suitable for vector and matrix accelerators; strong analytical and debugging skills; good understanding of optimizing linear algebra algorithms; performance evaluation using QEMU, simulators, emulators, and real hardware. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. 2+ years of work experience with a programming language such as C, C++, Java, Python, etc. Applicants Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
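To make the kernel-optimization focus of this listing concrete, the small benchmark below (illustrative only, not from the posting) contrasts a naive Python matrix multiply with the vectorized BLAS path that NumPy dispatches to; widening exactly this kind of gap with hand-tuned SME/SVE kernels on ARM CPUs is the work described.

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Triple-loop reference implementation; no blocking or vectorization."""
    n, k = a.shape
    _, m = b.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i, p] * b[p, j]
            out[i, j] = s
    return out

a = np.random.rand(150, 150)
b = np.random.rand(150, 150)

t0 = time.perf_counter(); c_naive = naive_matmul(a, b); t1 = time.perf_counter()
t2 = time.perf_counter(); c_blas = a @ b; t3 = time.perf_counter()

print(f"naive loops: {t1 - t0:.3f}s, BLAS-backed numpy: {t3 - t2:.5f}s")
print("results match:", np.allclose(c_naive, c_blas))
```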

Posted 2 weeks ago

Apply

4.0 - 9.0 years

19 - 25 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience. 2+ years of work experience with a programming language such as C, C++, Java, Python, etc. Experience - 2 to 12 years Location - Bangalore General Summary As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Machine Learning Engineer, you will create and implement machine learning techniques, frameworks, and tools that enable the efficient utilization of state-of-the-art machine learning solutions over a broad set of technology verticals or designs. In this position, you will be responsible for assisting with the software design and development of the Qualcomm AI Stack, SDKs, and associated tools, targeting Snapdragon platforms. You will have the opportunity to show your passion for software design and development with your analytical, design, programming, and debugging skills. Responsibilities: Software development of the AI orchestration framework, engine, and tools to develop agentic workflows and execute the latest neural networks on Snapdragon chips. Validate the performance and accuracy of your software through detailed analysis and testing of the use cases. Minimum Qualifications: Software development experience using C/C++ and/or Python. Strong software development skills (e.g., data structure and algorithm design, object-oriented or other software design paradigm knowledge, software debugging and testing, etc.). Experience in developing applications using Inter-Process Communication (IPC), like AIDL or SOME/IP. Strong communication skills (verbal, presentation, written). Preferred Qualifications: Experience in developing embedded applications with low-level interactions between operating systems (e.g., Linux, Android, Windows, QNX) and hardware. Experience using/integrating Qualcomm AI Stack products (e.g., QNN, SNPE, QAIRT). Experience with LLM, LVM, LMM models, and other NN architectures. Experience with machine learning frameworks (e.g., TensorFlow, PyTorch, Keras).
Software development experience with Java. Ability to collaborate across a globally diverse team and multiple interests. Education Requirements: Required: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Preferred: Bachelor's in Computer Science, Computer Engineering, or Electrical Engineering. Applicants Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

8 - 12 Lacs

Bengaluru

Work from Office

All Peoples Church & World Outreach is looking for an AI/ML Software Engineer to join our dynamic team and embark on a rewarding career journey. Developing and directing software system validation and testing methods. Directing our software programming initiatives. Overseeing the development of documentation. Working closely with clients and cross-functional departments to communicate project statuses and proposals. Analyzing data to effectively coordinate the installation of new systems or the modification of existing systems. Managing the software development lifecycle. Monitoring system performance. Communicating key project data to team members and building cohesion among teams. Developing and executing project plans. Applying mathematics and statistics to problem-solving initiatives. Applying best practices and standard operating procedures. Creating innovative solutions to meet our company's technical needs. Testing new software and fixing bugs. Shaping the future of our systems.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Role and Responsibilities: · SME for clients: Serve as the primary point of contact for clients, including senior executives, on their Gen AI needs, to understand their requirements and deliver tailored generative AI solutions. · Project Leadership: Lead multiple generative AI projects, ensuring timely delivery and high-quality outcomes. Define project deliverables, timelines, and methodologies. · Team Management: Manage and mentor a team of AI specialists, fostering a collaborative and innovative environment. · Solution Design: Architect and implement advanced generative AI models, including GANs, VAEs, and transformer-based models, to solve complex business problems. · Thought Leadership: Provide strategic insights and thought leadership in generative AI, driving innovation and best practices within the organization. · Technical Expertise: Stay abreast of the latest advancements in generative AI and machine learning, integrating cutting-edge technologies into projects. Qualifications: · Educational Background: Master's or PhD in Computer Science, Engineering, Mathematics, or related fields from top-tier institutions. · Experience: 8+ years of experience in AI and machine learning, with a focus on generative AI, deep learning, and NLP. Proven track record in delivering generative AI projects. · Technical Skills: Proficiency in Python, TensorFlow, PyTorch, Keras, and cloud platforms (AWS, Azure, GCP). Experience with large-scale data processing and model deployment. · Certifications: Relevant certifications in AI and machine learning, such as TensorFlow Developer Certificate, AWS Certified Machine Learning – Specialty, or similar (preferred, not mandatory). · Leadership: Demonstrable leadership ability, with experience managing and mentoring teams. · Problem-Solving: Superior problem-solving skills, with a track record of delivering innovative generative AI solutions. · Communication: Excellent communication and presentation skills, with the ability to articulate complex technical concepts to non-technical stakeholders. · Industry Knowledge: Deep understanding of industry trends and challenges in generative AI and machine learning.

Posted 2 weeks ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title - Data Engineer Sr. Analyst ACS Song Management Level: Level 10 - Sr. Analyst Location: Kochi, Coimbatore, Trivandrum Must have skills: Python/Scala, PySpark/PyTorch Good to have skills: Redshift Job Summary You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include: Roles and Responsibilities Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help our business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions. Preparing data to create a unified database and building tracking solutions that ensure data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD. Professional and Technical Skills Expert in Python, Scala, PySpark, PyTorch, or JavaScript (at least 2 of these). Extensive experience in data analysis (big data Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras, etc.), and SQL, with 2-3 years of hands-on experience working with these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse. Additional Information Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS - AWS Certified Data Analytics - Specialty; Azure - Microsoft Certified Azure Data Scientist Associate; Snowflake - SnowPro Core - Data Engineer; Databricks Data Engineering. About Our Company | Accenture Qualification Experience: 3.5-5 years of experience is required Educational Qualification: Graduation
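Data pipelines of the kind this role describes usually follow a read-transform-write pattern. As an illustrative sketch only (paths and column names are placeholders), a PySpark job might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-demo").getOrCreate()

# Extract: read raw CSV (path and schema are placeholders).
orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: type the columns, derive revenue, keep completed orders only.
cleaned = (
    orders
    .withColumn("quantity", F.col("quantity").cast("int"))
    .withColumn("unit_price", F.col("unit_price").cast("double"))
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .filter(F.col("status") == "COMPLETED")
)

# Load: write partitioned Parquet for downstream analytics (path is a placeholder).
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders/")
```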

Posted 2 weeks ago

Apply

1.0 - 2.0 years

30 - 35 Lacs

Kolkata

Work from Office

Job Title - S&C Global Network - AI - Retail - AI/Gen AI Retail Analyst Management Level: 11 - Analyst Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata Must have skills: Strong understanding of the retail industry. Ability to work with large datasets from different sources (e.g., transactional data, customer data, social media data) and data preprocessing techniques. Proficiency in supervised and unsupervised learning algorithms. Proficiency in Python, with experience using libraries such as pandas, scikit-learn, and NumPy. Exposure to deep learning frameworks like TensorFlow, PyTorch, or Keras. Basic understanding of LLMs, NLP, and prompt engineering. Good to have skills: Familiarity with SQL, NoSQL databases, and big data technologies like Spark for handling large-scale data. Familiarity with transformer-based models such as BERT, GPT, and T5 for understanding and generating human language. Experience with Generative Pre-trained Transformers (GPT). Experience with data visualization tools. Job Summary: We are looking for a highly motivated AI / GenAI Retail Analyst to join our data science and AI team. In this role, you will play a key part in building and implementing AI-driven solutions that enhance customer experience, streamline operations, and drive business outcomes in the retail sector. You'll work closely with cross-functional teams to extract insights from data and support the development of generative AI applications. Roles & Responsibilities: Leverage Retail Knowledge: Utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs. Design, develop, and implement AI algorithms (machine learning, deep learning, etc.) and generative models (e.g., GPT, GANs) tailored to the retail sector. Focus on the creation of AI models that can help in a wide range of use cases across the customer lifecycle, personalize customer experiences, and optimize pricing and product assortment. Analyze and preprocess large datasets from various sources (sales data, customer data, inventory data) to build effective AI models. Translate data insights into actionable strategies for marketing, merchandising, and customer experience teams, for example, identifying cross-selling opportunities or optimizing product assortments. Build dashboards, visualizations, and reports to communicate findings, model performance, and business impact to stakeholders. Professional & Technical Skills: Expertise in machine learning, natural language processing (NLP), and generative AI models. Experience with platforms like TensorFlow, PyTorch, and OpenAI technologies. Knowledge of AI ethics and responsible AI practices. Additional Information: About Our Company | Accenture Qualification Experience: Minimum 1-2 years of relevant experience is required Educational Qualification: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field
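Customer segmentation is one of the unsupervised-learning use cases mentioned above. Purely as an illustrative sketch (synthetic data, invented feature names), a scikit-learn K-means segmentation could look like this:

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for per-customer transactional features.
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "annual_spend": rng.gamma(2.0, 500.0, size=1_000),
    "visits_per_month": rng.poisson(4, size=1_000),
    "avg_basket_size": rng.normal(35, 10, size=1_000),
})

X = StandardScaler().fit_transform(df)          # scale so no single feature dominates
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["segment"] = kmeans.labels_

print(df.groupby("segment").mean().round(1))    # per-segment profile for marketing
```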

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies