5.0 years
5 - 7 Lacs
Gurgaon
On-site
About the Opportunity
Job Type: Permanent
Application Deadline: 31 July 2025

Title: Senior Test Analyst
Department: ISS Delivery - Development - Gurgaon
Location: INB905E
Level: 3

We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our ISS Delivery team and feel like you're part of something bigger.

About your team
The Investment Solutions Services (ISS) delivery team provides systems development, implementation and support services for FIL's global Investment Management businesses across the asset management lifecycle. We support Fund Managers, Research Analysts, Traders and Investment Services Operations in all of FIL's international locations, including London, Hong Kong, and Tokyo.

About your role
You will join the QA chapter as a Senior Test Analyst and be responsible for executing testing activities for all applications under IM Technology based out of India. A typical day will look like this:
- Understand business needs and analyse requirements and user stories to carry out different testing activities.
- Collaborate with developers and BAs to understand new features, bug fixes, and changes in the codebase.
- Create and execute functional as well as automated test cases on different test environments to validate functionality.
- Log defects in the defect tracker and work with PMs and developers to prioritise and resolve them.
- Develop and maintain automation scripts, preferably using the Python stack (a minimal sketch of such a test follows this listing).
- Apply a deep understanding of both relational and non-relational databases.
- Document test cases, results and any other issues encountered during testing.
- Attend team meetings and stand-ups to discuss progress, risks and any issues that affect project deliveries.
- Stay updated with new tools, techniques and industry trends.

About you
- Seasoned software test analyst with 5+ years of hands-on experience.
- Hands-on experience automating web and backend tests using open-source tools (Playwright, pytest, Selenium, Requests, REST Assured, NumPy, pandas).
- Proficiency in writing and understanding complex DB queries in various databases (Oracle, Snowflake).
- Good understanding of cloud platforms (AWS, Azure).
- Experience in the finance/investment domain is preferable.
- Trade lifecycle experience in a vendor system such as CRD or Aladdin is preferable.
- Strong logical reasoning and problem-solving skills.
- Preferred programming languages: Python and Java.
- Familiarity with CI/CD tools (e.g., Jenkins) for automating deployment and testing workflows.

Feel rewarded
For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work - finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.

For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
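By way of illustration, the Playwright-plus-pytest stack named above supports short, self-contained UI checks. A minimal sketch, assuming the pytest-playwright plugin is installed; the URL and selectors are hypothetical placeholders, not details from this role:

```python
# Minimal UI smoke test using Playwright's pytest plugin (pytest-playwright).
# The `page` fixture is provided by the plugin; URL/selectors are placeholders.
import re
from playwright.sync_api import Page, expect

def test_fund_search_returns_results(page: Page):
    page.goto("https://example.com/funds")      # hypothetical app under test
    page.fill("#search-box", "global equity")   # selectors are illustrative
    page.click("#search-button")
    # expect() retries the assertion until it passes or times out.
    expect(page.locator(".result-row").first).to_be_visible()
    expect(page).to_have_title(re.compile("funds", re.IGNORECASE))
```

Run with `pytest --browser chromium`; the same structure extends to backend checks with Requests.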
Posted 3 weeks ago
3.0 - 5.0 years
9 - 13 Lacs
Gurugram
Work from Office
Job Summary
Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights in support of our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence.

Software Requirements
Required:
- PowerBI (or equivalent visualization tools such as Streamlit or Dash)
- SQL (for data extraction, manipulation, and querying)
- Python (for scripting, automation, and advanced analysis)
- Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar)
Preferred:
- Familiarity with cloud platforms, especially AWS services related to data storage and processing
- Knowledge of other visualization platforms (Tableau, Looker)
- Familiarity with source control systems (e.g., Git)

Overall Responsibilities
- Develop, redesign, and maintain interactive dashboards and visualization tools that provide actionable insights.
- Perform complex data analysis, transformation, and validation using SQL and Python.
- Automate data workflows, reporting, and visualizations to streamline processes.
- Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions.
- Support data extraction, cleaning, and validation from various sources, ensuring data accuracy.
- Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability.
- Document technical procedures and contribute to best practices for data management and reporting.

Performance Outcomes:
- Timely, accurate, and insightful dashboards and reports.
- Increased automation reducing manual effort.
- Clear communication of insights and data-driven recommendations to stakeholders.

Technical Skills (by Category)
Programming Languages:
- Essential: SQL, Python
- Preferred: R, additional scripting languages
Databases/Data Management:
- Essential: Relational databases (SQL Server, MySQL, Oracle)
- Preferred: NoSQL databases such as MongoDB; cloud data warehouses (AWS Redshift, Snowflake)
Cloud Technologies:
- Essential: Basic understanding of AWS cloud services (S3, EC2, RDS)
- Preferred: Experience with cloud-native data solutions and deployment
Frameworks and Libraries:
- Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash
- Visualization: PowerBI, Tableau (preferred)
Development Tools and Methodologies:
- Version control: Git
- Automation tools for workflows and reporting
- Familiarity with Agile methodologies
Security Protocols:
- Awareness of data security best practices and compliance standards in cloud environments

Experience Requirements
- 3-5 years of experience in data analysis, visualization, or related data roles.
- Proven ability to deliver insightful dashboards, reports, and analysis.
- Experience working across teams and communicating complex insights clearly.
- Knowledge of cloud environments such as AWS or other providers is desirable.
- Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer.

Day-to-Day Activities
- Collaborate with stakeholders to gather requirements and define data visualization strategies.
- Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools (see the sketch after this listing).
- Extract, transform, and analyze data using SQL and Python scripts.
- Automate recurring workflows and report generation to improve operational efficiency.
- Troubleshoot data issues and derive insights to support decision-making.
- Monitor and optimize cloud data storage and processing pipelines.
- Present findings to business units, translating technical outputs into actionable recommendations.

Qualifications
- Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a Master's degree is a plus.
- Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous.
- Demonstrated experience with data visualization and scripting tools.
- A continuous-learning mindset to stay updated on new data analysis trends and cloud innovations.

Professional Competencies
- Strong analytical and problem-solving skills.
- Effective communication, with the ability to explain complex insights clearly.
- Collaborative team player with stakeholder management skills.
- Adaptability to rapidly changing data or project environments.
- Innovative mindset to suggest and implement data-driven solutions.
- Organized, self-motivated, and capable of managing multiple priorities efficiently.
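As a flavour of the Streamlit/Dash alternative to PowerBI mentioned above, here is a minimal dashboard sketch; the file name and column names are illustrative placeholders, not part of the role:

```python
# Minimal Streamlit dashboard: load, transform, and visualize a dataset.
import pandas as pd
import streamlit as st

@st.cache_data  # re-load only when the underlying inputs change
def load_data() -> pd.DataFrame:
    df = pd.read_csv("sales.csv", parse_dates=["order_date"])  # placeholder
    df["month"] = df["order_date"].dt.to_period("M").astype(str)
    return df

df = load_data()
st.title("Monthly Sales Overview")
region = st.selectbox("Region", sorted(df["region"].unique()))
monthly = (df[df["region"] == region]
           .groupby("month", as_index=False)["revenue"].sum())
st.bar_chart(monthly.set_index("month")["revenue"])
st.dataframe(monthly)
```

Launched with `streamlit run app.py`, this replaces a manually refreshed report with a self-service view.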
Posted 3 weeks ago
0 years
1 - 1 Lacs
Mohali
On-site
About the Role
We are looking for a passionate Data Science fresher who has completed at least 6 months of practical training, internship, or project experience in the data science field. This is an exciting opportunity to apply your analytical and problem-solving skills to real-world datasets while working closely with experienced data scientists and engineers.

Key Responsibilities
- Assist in data collection, cleaning, and preprocessing from various sources.
- Support the team in building, evaluating, and optimizing ML models.
- Perform exploratory data analysis (EDA) to derive insights and patterns (a small worked sketch follows this listing).
- Work on data visualization dashboards and reports using tools like Power BI, Tableau, or Matplotlib/Seaborn.
- Collaborate with senior data scientists and domain experts on ongoing projects.
- Document findings, code, and models in a structured manner.
- Continuously learn and adopt new techniques, tools, and frameworks.

Required Skills & Qualifications
- Education: Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
- Experience: Minimum 6 months of internship/training in data science, analytics, or machine learning.
- Technical Skills:
  - Proficiency in Python (Pandas, NumPy, Scikit-learn, etc.).
  - Understanding of machine learning algorithms (supervised/unsupervised).
  - Knowledge of SQL and database concepts.
  - Familiarity with data visualization tools/libraries.
  - Basic understanding of statistics and probability.
- Soft Skills:
  - Strong analytical thinking and problem-solving ability.
  - Good communication and teamwork skills.
  - Eagerness to learn and grow in a dynamic environment.

Good to Have (Optional)
- Exposure to cloud platforms (AWS, GCP, Azure).
- Experience with big data tools (Spark, Hadoop).
- Knowledge of deep learning frameworks (TensorFlow, PyTorch).

What We Offer
- Opportunity to work on real-world data science projects.
- Mentorship from experienced professionals in the field.
- A collaborative, innovative, and supportive work environment.
- A growth path to become a full-time Data Scientist with us.

Job Types: Full-time, Permanent, Fresher
Pay: ₹10,000.00 - ₹15,000.00 per month
Benefits: Health insurance
Schedule: Day shift, fixed shift, Monday to Friday
Application Question(s): Have you completed your 6-month training?
Education: Bachelor's (Preferred)
Language: English (Preferred)
Work Location: In person
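For candidates wondering what the EDA-plus-baseline workflow above looks like in practice, here is a small self-contained sketch using scikit-learn's built-in iris dataset (chosen only so the example runs anywhere):

```python
# Load a dataset, run quick exploratory checks, then fit a baseline model.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = load_iris(as_frame=True).frame

# Exploratory checks: shape, missing values, class balance.
print(df.shape)
print(df.isna().sum().sum(), "missing values")
print(df["target"].value_counts(normalize=True))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="target"), df["target"],
    test_size=0.2, random_state=42, stratify=df["target"])

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```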
Posted 3 weeks ago
6.0 - 9.0 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Hi Everyone,

Role: Data Scientist
Experience: 6-9 years
Work Mode: Hybrid
Work Location: Chennai/Bangalore/Pune
Notice Period: Immediate - 30 days

Skills and Experience: 5+ years of data science and ML exposure

Key Accountabilities & Responsibilities
- Support the Data Science team with the development of advanced analytics/machine learning/artificial intelligence initiatives.
- Analyze large and complex datasets to uncover trends and insights.
- Support the development of predictive models and machine learning workflows.
- Perform exploratory data analysis to guide product and business decisions.
- Collaborate with cross-functional teams, including product, marketing, and engineering.
- Assist with the design and maintenance of data pipelines.
- Clearly document and communicate analytical findings to technical and non-technical stakeholders.

Basic Qualifications
- Qualification in Data Science, Statistics, Computer Science, Mathematics, or a related field.
- Proficiency in Python and key data science libraries (e.g., pandas, NumPy, scikit-learn); a short pipeline sketch follows this listing.
- Operational understanding of machine learning principles and statistical modeling.
- Experience with SQL for data querying.
- Strong communication skills and a collaborative mindset.

Preferred Qualifications
- Exposure to cloud platforms such as AWS, GCP, or Azure.
- Familiarity with data visualization tools like Tableau, Power BI, or matplotlib.
- Participation in personal data science projects or online competitions (e.g., Kaggle).
- Understanding of version control systems like Git.

Kindly share the following details:
- Updated CV
- Relevant skills
- Total experience
- Current company
- Current CTC
- Expected CTC
- Notice period
- Current location
- Preferred location
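A hedged sketch of the pandas/NumPy/scikit-learn proficiency named above: a preprocessing-plus-model pipeline evaluated with cross-validation on synthetic stand-in data (all column names are invented for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
df = pd.DataFrame({                      # synthetic stand-in for client data
    "tenure": rng.integers(1, 60, 500),
    "monthly_spend": rng.normal(50, 15, 500),
})
df["churned"] = (df["monthly_spend"] / df["tenure"] > 2).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),         # scaling inside the pipeline avoids
    ("clf", LogisticRegression()),       # leaking test folds into the fit
])
scores = cross_val_score(pipe, df[["tenure", "monthly_spend"]],
                         df["churned"], cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

Keeping the scaler inside the Pipeline is the design point: every cross-validation fold fits preprocessing on its own training split.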
Posted 3 weeks ago
2.0 years
2 - 9 Lacs
Bengaluru
On-site
Updraft. Helping you make changes that pay off.

Updraft is an award-winning, FCA-authorised, high-growth fintech based in London. Our vision is to revolutionise the way people spend and think about money, by automating the day-to-day decisions involved in managing money and mainstream borrowings like credit cards, overdrafts and other loans.
- A 360-degree spending view across all your financial accounts (using Open Banking)
- A free credit report with tips and guidance to help improve your credit score
- Native AI-led personalised financial planning to help users manage money, pay off their debts and improve their credit scores
- Intelligent lending products to help reduce the cost of credit

We have built scale and are getting well recognised in the UK fintech ecosystem:
- 800k+ users of the mobile app, which has helped users swap c. £500m of costly credit-card debt for smarter credit, putting hundreds of thousands on a path to better financial health.
- The product is highly rated by our customers: 4.8 on Trustpilot, 4.8 on the Play Store, and 4.4 on the iOS App Store.
- We were selected for Tech Nation Future Fifty 2025, a program that recognises and supports successful and innovative scaleups through to IPO; 30% of UK unicorns have come out of this program.
- Updraft once again featured in the Sifted 100 UK startups, among only 25 companies to have made the list in both 2024 and 2025.

We are looking for exceptional talent to join us on our next stage of growth with a compelling proposition: purpose you can feel, impact you can measure, and ownership you'll actually hold. Expect a hybrid, London-hub culture where cross-functional squads tackle real-world problems with cutting-edge tech; generous learning budgets and wellness benefits; and the freedom to experiment, ship, and see your work reflected in customers' financial freedom. At Updraft, you'll help build a fairer credit system.

Role and Responsibilities
Join our Analytics team to deliver cutting-edge solutions:
- Support business and operations teams in making better data-driven decisions by ingesting new data sources, creating intuitive dashboards and producing data insights.
- Build new data processing workflows to extract data from core systems for analytic products.
- Maintain and improve existing data processing workflows.
- Contribute to optimizing and maintaining the production data pipelines, including system and process improvements.
- Contribute to the development of analytical products and dashboards with integration of internal and third-party data sources/APIs.
- Contribute to cataloguing and documentation of data.

Requirements
- Bachelor's degree in mathematics, statistics, computer science or a related field.
- 2-5 years of experience in data engineering/analytics and related fields.
- An advanced analytical framework and experience relating data insights to business problems and creating appropriate dashboards.
- High proficiency in ETL, SQL and database management (mandatory).
- Experience with AWS services like Glue, Athena, Redshift, Lambda and S3 (a short workflow sketch follows this listing).
- Python programming experience using data libraries like pandas and NumPy.
- Interest in machine learning, logistic regression and emerging solutions for data analytics.
- You are comfortable working without direct supervision on outcomes that have a direct impact on the business.
- You are curious about the data and have a desire to ask "why?"

Good to have, but not mandatory:
- Experience in a startup or fintech will be considered a great advantage.
- Awareness of, or hands-on experience with, ML/AI implementation or MLOps.
- AWS foundational certification.

Benefits
- Opportunities to take ownership: work on high-impact projects with real autonomy.
- Fast career growth: gain exposure to multiple business areas and advance quickly.
- Be at the forefront of innovation: work on cutting-edge technologies and disruptive ideas.
- Collaborative and flat hierarchy: work closely with leadership and have a real voice.
- Dynamic, fast-paced environment: no two days are the same; challenge yourself every day.
- A mission-driven company: be part of something that makes a difference.
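One plausible shape for the Athena/S3 workflow named in the requirements, sketched with the AWS SDK for pandas (awswrangler); the database, table, column and bucket names are placeholders, not Updraft's actual schema:

```python
# Query curated data via Athena, reshape with pandas, publish back to S3.
import awswrangler as wr
import pandas as pd

df = wr.athena.read_sql_query(
    "SELECT user_id, balance, updated_at FROM accounts "
    "WHERE balance IS NOT NULL",
    database="open_banking",              # placeholder Glue database
)

# Light transformation before publishing for dashboards: last daily balance.
df["updated_at"] = pd.to_datetime(df["updated_at"])
daily = (df.set_index("updated_at")
           .groupby("user_id")["balance"]
           .resample("D").last()
           .reset_index())
daily["snapshot_date"] = daily["updated_at"].dt.date.astype(str)

wr.s3.to_parquet(daily, path="s3://analytics-curated/balances/",  # placeholder
                 dataset=True, partition_cols=["snapshot_date"])
```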
Posted 3 weeks ago
3.0 - 6.0 years
10 - 20 Lacs
Gurugram
Hybrid
Role & Responsibilities
- Highly focused individual with a self-driven attitude.
- Problem solving and logical thinking to automate and improve internal processes.
- Use tools such as SQL and Python to manage the requirements of different data asset projects.
- Diligent involvement in activities like data cleaning, retrieval, manipulation, analytics and reporting.
- Use data science and statistical techniques to build machine learning models and work with textual data.
- Keep up-to-date knowledge of the industry and related markets.
- Ability to multitask, prioritize, and manage time efficiently.
- Understand the needs of the hiring organization or client in order to target solutions to their benefit.
- Advanced speaking and writing skills for effective communication.
- Ability to work in cross-functional teams, demonstrating a high level of commitment and coordination.
- Attention to detail and commitment to accuracy in deliverables.
- Demonstrate and develop a sense of ownership of assigned tasks.
- Ability to keep sensitive business information confidential.
- Contribute positively and extensively towards building the organization's reputation, brand and operational excellence.

Preferred Candidate Profile
- 3-6 years of relevant experience in data science.
- Advanced knowledge of statistics and the basics of machine learning.
- Experienced in dealing with textual data and using natural language processing techniques (a short sketch follows this listing).
- Ability to conduct analysis to extract actionable insights.
- Technical skills in Python (NumPy, Pandas, NLTK, transformers, spaCy), SQL and other programming languages for dealing with large datasets.
- Experienced in data cleaning, manipulation, feature engineering and model building.
- Experienced in the end-to-end development of a data science project.
- Strong interpersonal skills; extremely resourceful.
- Proven ability to complete assigned tasks according to the outlined scope and timeline.
- Good language, communication and writing skills in English.
- Expertise in tools like MS Office: PowerPoint, Excel and Word.
- Graduate or post-graduate from a reputed college or university.
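A small sketch of the text-processing skills listed above (regex cleaning plus named-entity extraction with spaCy), assuming the small English model has been installed with `python -m spacy download en_core_web_sm`; the sentence is invented:

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")

raw = "Acme Corp. signed a $2.5M contract with Globex in New Delhi in May 2024."
text = re.sub(r"\s+", " ", raw).strip()      # collapse stray whitespace

doc = nlp(text)
entities = [(ent.text, ent.label_) for ent in doc.ents]
print(entities)
# Typically something like (exact spans depend on the model version):
# [('Acme Corp.', 'ORG'), ('$2.5M', 'MONEY'), ('Globex', 'ORG'),
#  ('New Delhi', 'GPE'), ('May 2024', 'DATE')]
```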
Posted 3 weeks ago
1.0 years
3 - 4 Lacs
India
On-site
About Us
Red & White Education Pvt. Ltd., established in 2008, is Gujarat's top NSDC- and ISO-certified institute focused on skill-based education and global employability.

Role Overview
We're hiring a full-time onsite AI, Machine Learning, and Data Science Faculty/Trainer with strong communication skills and a passion for teaching.

Key Responsibilities
- Deliver high-quality lectures on AI, Machine Learning, and Data Science.
- Design and update course materials, assignments, and projects.
- Guide students on hands-on projects, real-world applications, and research work.
- Provide mentorship and support for student learning and career development.
- Stay updated with the latest trends and advancements in AI/ML and Data Science.
- Conduct assessments, evaluate student progress, and provide feedback.
- Participate in curriculum development and improvements.

Skills & Tools
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (must), Pandas, NumPy, Excel.
- ML & AI Tools: Scikit-learn (must), XGBoost, LightGBM, TensorFlow, PyTorch (must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.

Education & Experience Requirements
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- Minimum 1+ years of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.

For further information, please feel free to contact us on 7862813693 or via email at career@rnwmultimedia.edu.in.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹35,000.00 per month
Benefits: Flexible schedule, leave encashment, paid sick time, paid time off, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Experience: Teaching/Mentoring: 1 year (Required); AI: 1 year (Required); ML: 1 year (Required); Data Science: 1 year (Required)
Work Location: In person
Posted 3 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Comfortable following Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.).
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.).
- High familiarity with the use of DL theory/practices in NLP applications.
- Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, Scikit-learn, NumPy and Pandas.
- Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text and farm-haystack.
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.).
- Has implemented real-world fine-tuned BERT or other transformer models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment.
- Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build and Vertex AI.
- Good working knowledge of other open-source packages for benchmarking and deriving summaries.
- Experience using GPUs/CPUs on cloud and on-prem infrastructure.
- Skillset to leverage cloud platforms for Data Engineering, Big Data and ML needs.
- Use of Docker (experience with experimental Docker features, docker-compose, etc.).
- Familiarity with orchestration tools such as Airflow and Kubeflow.
- Experience in CI/CD and infrastructure-as-code tools such as Terraform.
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure and safe AI tooling.
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus.

Responsibilities
- Design NLP/LLM/GenAI applications and products following robust coding practices.
- Explore state-of-the-art models and techniques that can be applied to automotive industry use cases.
- Conduct ML experiments to train and infer models; where needed, build models that abide by memory and latency restrictions.
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools (a short API sketch follows this listing).
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.).
- Converge multiple bots into super apps using LLMs with multimodality.
- Develop agentic workflows using AutoGen, Agent Builder and LangGraph.
- Build modular AI/ML products that can be consumed at scale.
- Data engineering: perform distributed computing, specifically parallelism and scalability in data processing, modeling and inference, through Spark, Dask, RAPIDS or RAPIDS cuDF.
- Build Python-based APIs (e.g., using FastAPI, Flask or Django).
- Experience with Elasticsearch, Apache Solr and vector databases is a plus.

Qualifications
Education: Bachelor's or Master's degree in Computer Science, Engineering, Maths or Science.
Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
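A minimal sketch of the "REST API for an NLP application" pattern named above: a Hugging Face pipeline served with FastAPI. This is an illustrative skeleton (the default sentiment model is downloaded on first run), not the team's actual service:

```python
# Run with `uvicorn app:app` after `pip install fastapi uvicorn transformers torch`.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")   # default distilled BERT model

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn):
    result = classifier(payload.text)[0]      # e.g. {'label': 'POSITIVE', ...}
    return {"label": result["label"], "score": round(result["score"], 4)}
```

The same skeleton containerizes directly with Docker and scales behind Kubernetes, which is the deployment path the role describes.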
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Umami Bioworks, we are a leading bioplatform for the development and production of sustainable planetary biosolutions. Through the synthesis of machine learning, multi-omics biomarkers, and digital twins, UMAMI has established market-leading capability for the discovery and development of cultivated bioproducts that can seamlessly transition to manufacturing with UMAMI's modular, automated, plug-and-play production solution.

By partnering with market leaders as their biomanufacturing solution provider, UMAMI is democratizing access to sustainable blue bioeconomy solutions that address a wide range of global challenges.

We're a venture-backed biotech startup located in Singapore where some of the world's smartest, most passionate people are pioneering a sustainable food future that is attractive and accessible to people around the world. We are united by our collective drive to ask tough questions, take on challenging problems, and apply cutting-edge science and engineering to create a better future for humanity. At Umami Bioworks, you will be encouraged to dream big and will have the freedom to create, invent, and do the best, most impactful work of your career.

Umami Bioworks is looking to hire an inquisitive, innovative, and independent Machine Learning Engineer to join our R&D team in Bangalore, India, to develop scalable, modular ML infrastructure integrating predictive and optimization models across biological and product domains. The role focuses on orchestrating models for media formulation, bioprocess tuning, metabolic modeling, and sensory analysis to drive data-informed R&D. The ideal candidate combines strong software engineering skills with multi-model system experience, collaborating closely with researchers to abstract biological complexity and enhance predictive accuracy.
Responsibilities
- Design and build the overall architecture for a multi-model ML system that integrates distinct models (e.g., media prediction, bioprocess optimization, sensory profile, GEM-based outputs) into a unified decision pipeline.
- Develop robust interfaces between sub-models to enable modularity, information flow, and cross-validation across stages (e.g., outputs of one model feeding into another).
- Implement model orchestration logic to allow conditional routing, fallback mechanisms, and ensemble strategies across different models (a toy sketch follows this listing).
- Build and maintain pipelines for training, testing, and deploying multiple models across different data domains.
- Optimize inference efficiency and reproducibility by designing clean APIs and containerized deployments.
- Translate conceptual product flow into technical architecture diagrams, integration roadmaps, and modular codebases.
- Implement model monitoring and versioning infrastructure to track performance drift, flag outliers, and allow comparison across iterations.
- Collaborate with data engineers and researchers to abstract away biological complexity and ensure a smooth ML-only engineering focus.
- Lead efforts to refactor and scale ML infrastructure for future integrations (e.g., generative layers, reinforcement learning modules).

Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning, Computational Biology, Data Science, or a related field.
- Proven experience developing and deploying multi-model machine learning systems in a scientific or numerical domain.
- Exposure to hybrid modeling approaches and/or reinforcement learning strategies.

Experience
- Experience with multi-model systems.
- Work with numerical/scientific (multi-modal) datasets.
- Hybrid modelling and/or RL (AI systems).

Core Technical Skills
- Machine Learning Frameworks: PyTorch, TensorFlow, scikit-learn, XGBoost, CatBoost
- Model Orchestration: MLflow, Prefect, Airflow
- Multi-model Systems: ensemble learning, model stacking, conditional pipelines
- Reinforcement Learning: RLlib, Stable-Baselines3
- Optimization Libraries: Optuna, Hyperopt, GPyOpt
- Numerical & Scientific Computing: NumPy, SciPy, pandas
- Containerization & Deployment: Docker, FastAPI
- Workflow Management: Snakemake, Nextflow
- ETL & Data Pipelines: pandas pipelines, PySpark
- Data Versioning: Git
- API design for modular ML blocks

You will work directly with other members of our small but growing team to do cutting-edge science and will have the autonomy to test new ideas and identify better ways to do things.
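To make "conditional routing with a fallback" concrete, here is a toy sketch of one sub-model's output feeding another, with an ensemble-style fallback when upstream confidence is low. The model classes and threshold are invented placeholders, not UMAMI's actual pipeline:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Prediction:
    value: np.ndarray
    confidence: float

class MediaModel:                      # placeholder upstream sub-model
    def predict(self, x: np.ndarray) -> Prediction:
        return Prediction(value=x * 0.9, confidence=0.82)

class BioprocessModel:                 # placeholder downstream sub-model
    def predict(self, media: np.ndarray) -> Prediction:
        return Prediction(value=media.sum(keepdims=True), confidence=0.75)

def run_pipeline(x: np.ndarray, threshold: float = 0.7) -> Prediction:
    media = MediaModel().predict(x)
    # Conditional routing: feed downstream only if upstream is confident;
    # otherwise fall back to a conservative ensemble-style average.
    if media.confidence >= threshold:
        return BioprocessModel().predict(media.value)
    fallback = np.mean([media.value, x], axis=0)
    return Prediction(value=fallback, confidence=media.confidence)

print(run_pipeline(np.array([1.0, 2.0, 3.0])))
```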
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
- Strong proficiency in Python, with a deep understanding of object-oriented programming (OOP) principles.
- Experience designing and implementing modular, reusable, and scalable architectures.
- Experience using version control systems like Git and Git workflows.
- Familiarity with UML and architectural modeling tools.
- Expertise in design patterns (e.g., Factory, Singleton, Observer); a small Factory sketch follows this listing.
- Experience with Python packages: NumPy, Pandas, Matplotlib, PyTest, black, flake8.
- Solid grasp of the software development lifecycle (SDLC) and Agile methodologies.
- Knowledge of unit testing frameworks (e.g., pytest) and mocking.
- Interpretation of VBA macros.
- Performing code quality checks.
- Ability to understand engineering workflows and requirements and communicate with stakeholders.

Nice to have:
- Knowledge of stress engineering tools like ISAMI.

Communication:
- Fluent in written and spoken English for global collaboration.
- Produces high-quality documentation, reports, and presentations.
- Confident in leading discussions and negotiations in English.
- Skilled in writing clear and concise emails, user stories, and technical specs.
- Comfortable presenting to international stakeholders and executive audiences.
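A compact sketch of the Factory pattern named above, paired with a pytest-style test; the exporter classes are illustrative placeholders:

```python
from abc import ABC, abstractmethod
import json

class Exporter(ABC):
    @abstractmethod
    def export(self, data: dict) -> str: ...

class JsonExporter(Exporter):
    def export(self, data: dict) -> str:
        return json.dumps(data)

class CsvExporter(Exporter):
    def export(self, data: dict) -> str:
        return ",".join(f"{k}={v}" for k, v in data.items())

def make_exporter(fmt: str) -> Exporter:
    """Factory: callers depend on the Exporter interface, not concrete classes."""
    registry = {"json": JsonExporter, "csv": CsvExporter}
    try:
        return registry[fmt]()
    except KeyError:
        raise ValueError(f"unknown format: {fmt!r}") from None

def test_factory_returns_json_exporter():   # run with `pytest this_file.py`
    assert make_exporter("json").export({"a": 1}) == '{"a": 1}'
```

The registry dict keeps the factory open for extension: registering a new exporter adds one line rather than another if/elif branch.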
Posted 3 weeks ago
5.0 years
10 - 15 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients.
Salary range: Rs 10,00,000 - Rs 15,00,000 (i.e., INR 10-15 LPA)
Minimum experience: 5 years
Location: Bengaluru
Job type: Full-time

Requirements
We are seeking a highly skilled and experienced Computer Vision Engineer to join our growing AI team. This role is ideal for someone with strong expertise in deep learning and a solid background in real-time video analytics, model deployment, and computer vision applications. You'll be responsible for developing scalable computer vision pipelines and deploying them across cloud and edge environments, helping build intelligent visual systems that solve real-world problems.

Key Responsibilities
- Model development and training: design, train, and optimize deep learning models for object detection, segmentation, and tracking using frameworks like YOLO, UNet, Mask R-CNN, and Deep SORT.
- Computer vision applications: build robust pipelines for applications including image classification, real-time object tracking, and video analytics using OpenCV, NumPy, and TensorFlow/PyTorch.
- Deployment and optimization: deploy trained models on Linux-based GPU systems and edge devices (e.g., Jetson Nano, Google Coral), ensuring low-latency performance and efficient hardware utilization.
- Real-time inference: implement and optimize real-time inference systems, ensuring minimal delay in video processing pipelines (a minimal loop sketch follows this listing).
- Model management: use tools like Docker, Git, and MLflow (or similar) for version control, environment management, and model lifecycle tracking.
- Collaboration and documentation: work cross-functionally with hardware, backend, and software teams; document designs, architectures, and research findings to ensure reproducibility and scalability.

Technical Expertise Required
- Languages and libraries: advanced proficiency in Python and solid experience with OpenCV, NumPy, and other image processing libraries.
- Deep learning frameworks: hands-on experience with TensorFlow and PyTorch, and integration with model training pipelines.
- Computer vision models: object detection (YOLO, all versions); segmentation (UNet, Mask R-CNN); tracking (Deep SORT or similar).
- Deployment skills: real-time video analytics implementation and optimization; experience with Docker for containerization; version control using Git; model tracking using MLflow or comparable tools.
- Platform experience: proven experience deploying models in Linux-based GPU environments and on edge devices (e.g., the NVIDIA Jetson family, Coral TPU).

Professional & Educational Requirements
- Education: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related discipline.
- Experience: minimum 5 years of industry experience in AI/ML with a strong focus on computer vision and system-level design, and a proven portfolio of production-level projects in image/video processing or real-time systems.

Preferred Qualities
- Strong problem-solving and debugging skills.
- Excellent communication and teamwork capabilities.
- A passion for building smart, scalable vision systems.
- A proactive and independent approach to research and implementation.
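A minimal real-time inference loop of the kind described above, sketched with OpenCV and the ultralytics YOLO API (assuming `pip install ultralytics opencv-python`); the weights file and video source are placeholder choices:

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # small pretrained detector (placeholder)
cap = cv2.VideoCapture(0)             # webcam; could be an RTSP stream instead

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]   # one forward pass per frame
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        label = results.names[int(box.cls[0])]
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, label, (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

In production the display loop would be replaced by an encoder or message queue, and a tracker such as Deep SORT would consume the per-frame boxes.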
Posted 3 weeks ago
2.5 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Global Business Services delivers control functions responsible for providing assurance over data and processes used to record risk, P/L, balance sheet and financial results in support of the Global Markets, Corporate Treasury and Corporate Investments businesses. There are seven GMO core functions that are critical in ensuring business process control: New Business Development, Trade Capture Substantiation, P&L Validation, Risk/Position Validation, Balance Sheet Substantiation, Event Monitoring, and Front-Back Process Oversight. The QS (Quantitative Services) team is part of GBS. The Data Science Group is involved in the development, testing and monitoring of machine learning models.

Job Description
The associate will be involved in the entire machine learning development lifecycle.

Responsibilities
- Collaborate with stakeholders and identify opportunities for leveraging large amounts of unstructured financial data.
- Analyze and model structured data using statistical methods, and implement the algorithms and software needed to perform analyses.
- Undertake preprocessing of structured/unstructured data and analyze information to discover trends and patterns.
- Present information using data visualization techniques.
- Coordinate with different teams to implement the models and monitor outcomes.
- Build machine learning models to automate processes and reduce operational overhead.
- Perform continuous model monitoring and model support (a short monitoring sketch follows this listing).
- Build processes and tools for analyzing model performance and data accuracy.

Requirements
- Education: degree in Applied Math, Statistics, Computer Science or any other quantitative field from premier institutes.
- Experience: 2.5 to 4 years.
- Certifications (if any): NA.

Foundational Skills
- Strong technical skills, including experience using Python/R and SQL or other object-oriented languages.
- Familiarity with various machine learning algorithms and modelling techniques, e.g., regression (linear and logit) and classification (SVM, Naïve Bayes, etc.).
- Banking domain experience is preferred but not required.
- Strong analytical/math skills (e.g., statistics, algebra).
- Familiarity with pandas, NumPy, scikit-learn and SciPy.
- Basics of visualization tools and techniques, e.g., matplotlib.

Desired Skills
- Problem-solving aptitude.
- Excellent communication and presentation skills.
- Total experience of not more than 5 years, with at least 2 years of relevant experience, is preferred.

Work Timings: 12:00 PM - 9:00 PM
Job Location: Hyderabad
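One hedged sketch of the continuous model-monitoring responsibility above: score a new labelled batch, compare against a stored baseline, and flag degradation. The baseline values, threshold and column names are illustrative assumptions:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, f1_score

BASELINE = {"accuracy": 0.91, "f1": 0.88}   # metrics captured at deployment
MAX_DROP = 0.05                             # tolerated absolute degradation

def monitor_batch(batch: pd.DataFrame, model) -> dict:
    """Score one batch and flag metrics that dropped beyond tolerance."""
    preds = model.predict(batch.drop(columns=["label"]))
    current = {
        "accuracy": accuracy_score(batch["label"], preds),
        "f1": f1_score(batch["label"], preds),
    }
    alerts = [m for m, base in BASELINE.items()
              if base - current[m] > MAX_DROP]
    return {"metrics": current, "alerts": alerts}

# A non-empty "alerts" list would typically trigger a retraining review.
```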
Posted 3 weeks ago
4.0 years
0 Lacs
Mysore, Karnataka, India
On-site
Experience: 4+ years
Salary: USD 80,000/year (based on experience)
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Vadodara)
Placement Type: Full-time contract for 12 months (40 hrs/week, 160 hrs/month)
(Note: This is a requirement for one of Uplers' clients - a USA-based, Series A funded technology startup.)

What do you need for this opportunity?
Must-have skills: Generative Models, JAX, Reinforcement Learning, Scikit-learn, Generative AI, Natural Language Processing (NLP), PyTorch, Retrieval-Augmented Generation, Computer Vision

A USA-based, Series A funded technology startup is looking for a Senior Deep Learning Engineer.

Job Summary
We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions.

Key Responsibilities
- Model development and optimization: design, train, and deploy advanced deep learning models for applications such as computer vision, natural language processing, speech recognition, and recommendation systems; optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). A minimal training-loop sketch follows this listing.
- Research and innovation: stay updated with the latest advancements in deep learning, AI, and related technologies; develop novel architectures and techniques to push the boundaries of what's possible in AI applications.
- System design and deployment: architect and implement scalable and reliable machine learning pipelines for training and inference; collaborate with software and DevOps engineers to deploy models into production environments.
- Collaboration and leadership: work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables; provide mentorship and technical guidance to junior team members and peers.
- Data management: collaborate with data engineering teams to preprocess, clean, and augment large datasets; develop tools and processes for efficient data handling and annotation.
- Performance evaluation: define and monitor key performance indicators (KPIs) to evaluate model performance and impact; conduct rigorous A/B testing and error analysis to continuously improve model outputs.

Qualifications and Skills
- Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field; PhD preferred.
- Experience: 5+ years of experience developing and deploying deep learning models, with a proven track record of delivering AI-driven products or research with measurable impact.
- Technical skills: proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX; strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn; familiarity with distributed computing frameworks such as Spark or Dask; hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes).
- Domain expertise: experience in at least one specialized domain, such as computer vision, NLP, or time-series analysis; familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus.
- Soft skills: strong problem-solving skills and the ability to work independently; excellent communication and collaboration abilities; commitment to fostering a culture of innovation and excellence.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
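For orientation, here is a compact PyTorch training-loop sketch of the kind of model development described; the synthetic data and tiny MLP are illustrative only:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

X = torch.randn(1024, 20)                     # synthetic features
y = (X.sum(dim=1) > 0).long()                 # synthetic binary labels
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()                        # backpropagate gradients
        optimizer.step()
        total += loss.item() * xb.size(0)
    print(f"epoch {epoch}: loss={total / len(X):.4f}")
```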
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Area(s) of Responsibility
Skills: Data Engineer
Experience: 5-9 years
Job Location: Pune

Technical/Professional Experience (5+ years required):
- Work on migrating existing ETL processes and objects into Azure Synapse, requiring complex optimized stored procedures and functions.
- Develop and maintain data pipelines: design, implement, and maintain automated data pipelines from on-prem SQL DB to Azure Synapse using Azure Data Factory.
- Performance optimization: optimize data pipeline performance by identifying and addressing bottlenecks, improving query efficiency, and implementing best practices for data storage and retrieval in Azure Synapse.
- PL/SQL and database architecture: expertise in PL/SQL, database architecture, and performance tuning of existing procedures and processes.
- Automation and Python: help automate day-to-day processes using Python, with knowledge of the Pandas and NumPy libraries (a short sketch follows this listing).
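A hedged sketch of one such day-to-day automation: copying a table out of an on-prem SQL Server database in chunks with pandas and SQLAlchemy. Connection strings, schema and table names are placeholders, not the client's environment:

```python
import pandas as pd
from sqlalchemy import create_engine

src = create_engine("mssql+pyodbc://user:pass@onprem-dsn")      # placeholder
dst = create_engine("mssql+pyodbc://user:pass@staging-dsn")     # placeholder

query = "SELECT id, amount, booked_at FROM dbo.transactions"    # illustrative
for i, chunk in enumerate(pd.read_sql(query, src, chunksize=50_000)):
    chunk["amount"] = chunk["amount"].fillna(0).round(2)        # light cleanup
    chunk.to_sql("stg_transactions", dst, schema="stg",
                 if_exists="append" if i else "replace", index=False)
    print(f"chunk {i}: {len(chunk)} rows loaded")
```

Chunking keeps memory bounded on large tables; the first chunk replaces the staging table and later chunks append.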
Posted 3 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
eGrove Systems is looking for a Lead Django Backend Developer to join its team of experts.

Skill: Lead Django Backend Developer
Experience: 5+ years
Notice Period: Immediate to 15 days
Location: Chennai/Madurai

Interested candidates can send their resume to annie@egrovesys.com

Required Skills
- 5+ years of strong experience in Python, including 2 years with the Django web framework (a small ORM sketch follows this listing).
- Experience or knowledge in implementing various design patterns.
- Good understanding of the MVC framework and object-oriented programming.
- Experience with PostgreSQL/MySQL and MongoDB.
- Good knowledge of different frameworks, packages and libraries: Django/Flask, Django ORM, unit testing, NumPy, Pandas, Scrapy, etc.
- Experience developing in a Linux environment, with Git and Agile methodology.
- Good to have: knowledge of one of the JavaScript frameworks: jQuery, Angular, ReactJS.
- Good to have: experience implementing charts and graphs using various libraries.
- Good to have: experience in multi-threading and REST API management.

About the Company
eGrove Systems is a leading IT solutions provider specializing in eCommerce, enterprise application development, AI-driven solutions, digital marketing, and IT consulting services. Established in 2008, we are headquartered in East Brunswick, New Jersey, with a global presence. Our expertise includes custom software development, mobile app solutions, DevOps, cloud services, AI chatbots, SEO automation tools, and workforce learning systems. We focus on delivering scalable, secure, and innovative technology solutions to enterprises, startups, and government agencies.

At eGrove Systems, we foster a dynamic and collaborative work culture driven by innovation, continuous learning, and teamwork. We provide our employees with cutting-edge technologies, professional growth opportunities, and a supportive work environment to thrive in their careers.
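A small Django ORM sketch of the backend work described: a model definition plus a typical aggregate query. It assumes an existing Django project and app; all model and field names are illustrative:

```python
from django.db import models
from django.db.models import Count

class Order(models.Model):
    customer = models.CharField(max_length=120)
    status = models.CharField(max_length=20, default="pending")
    total = models.DecimalField(max_digits=10, decimal_places=2)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        indexes = [models.Index(fields=["status", "created_at"])]

# Typical ORM usage, e.g. inside a view or service layer:
def pending_counts_by_customer():
    return (Order.objects
            .filter(status="pending")
            .values("customer")
            .annotate(n=Count("id"))
            .order_by("-n"))
```

The composite index mirrors the query's filter so the ORM call stays fast as the table grows.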
Posted 3 weeks ago
4.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Role: Data Engineer (2-4 years experience)
📍 Location: Jaipur / Pune (work from office)
📧 hr@cognitivestars.com | 📞 99291-89819

We're looking for a Data Engineer (2-4 years) who's excited about building scalable ETL pipelines, working with Azure Data Lake and Databricks, and supporting AI/ML readiness across real-world datasets.

What You'll Do
- Design robust, reusable Python-based ETL pipelines from systems like SAP and OCPLM (a short sketch follows this listing).
- Clean and transform large-scale datasets for analytics and ML.
- Work with Azure Data Lake, Databricks, and modern cloud tools.
- Collaborate with analytics teams to support predictive and prescriptive models.
- Drive data automation and ensure data quality and traceability.

What You'll Bring
- 2-4 years of experience in data engineering or analytics programming.
- Strong skills in Python and SQL.
- Experience with Azure, Databricks, or similar cloud platforms.
- Familiarity with ML concepts (hands-on experience not mandatory).
- Ability to understand complex enterprise data even without direct system access.

Tools You'll Use
- Python | Pandas | NumPy | SQL
- Azure Data Lake | Databricks
- scikit-learn | XGBoost (as needed)
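A hedged PySpark sketch of the Databricks-style ETL described above: read raw extracts from the lake, standardize, and write a curated Delta table. Paths and column names are placeholders; on Databricks the `spark` session is provided for you:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sap-etl").getOrCreate()

raw = spark.read.parquet(
    "abfss://raw@datalake.dfs.core.windows.net/sap/orders")   # placeholder path
curated = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_date", "yyyyMMdd"))
           .withColumn("net_value", F.col("net_value").cast("double"))
           .filter(F.col("net_value") >= 0))                  # basic quality gate

(curated.write
        .format("delta")                  # Delta Lake, standard on Databricks
        .mode("overwrite")
        .partitionBy("order_date")
        .save("abfss://curated@datalake.dfs.core.windows.net/sap/orders"))
```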
Posted 3 weeks ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Google BigQuery
Good-to-have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary
As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes. You will also analyze and model client, market and key performance data, using analytical tools and techniques to develop business insights and improve decision-making.

Key Responsibilities (Dataproc, Pub/Sub, Dataflow, Kafka streaming, Looker, SQL):
1. Proven track record of delivering data integration and data warehousing solutions.
2. Strong, hands-on SQL (non-negotiable), with experience in data integration and migration projects.
3. Proficiency in the BigQuery SQL dialect (non-negotiable); a short client sketch follows this listing.
4. Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes) and experience in cloud solutions, mainly data platform services; GCP certifications are valued.
5. Experience in shell scripting, Python (non-negotiable), Oracle and SQL.

Technical Experience:
1. Expert in Python (non-negotiable), with strong hands-on SQL knowledge (non-negotiable); Python programming using Pandas and NumPy; deep understanding of data structures such as dictionaries, arrays, lists and trees; experience with pytest and code coverage is preferred.
2. Strong hands-on experience building solutions using cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (non-negotiable).
3. Proficiency with tools to automate AzDO CI/CD pipelines, such as Control-M, GitHub, Jira and Confluence.
4. Open mindset and the ability to quickly adopt new technologies.
5. Performance tuning of BigQuery SQL scripts.
6. GCP certification preferred.
7. Experience working in an agile environment.

Professional Attributes:
1. Good communication skills.
2. Ability to collaborate with different teams and suggest solutions.
3. Ability to work independently with little supervision, or as part of a team.
4. Good analytical and problem-solving skills.
5. Good team-handling skills.

Additional Information: The candidate should be ready for Shift B and to work as an individual contributor.
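A minimal sketch of querying BigQuery from Python with the official google-cloud-bigquery client (assuming application credentials are configured); the project, dataset and table names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()  # project inferred from credentials

sql = """
    SELECT order_date, SUM(net_value) AS revenue
    FROM `my_project.sales.orders`          -- placeholder table
    WHERE order_date >= @start
    GROUP BY order_date
    ORDER BY order_date
"""
job = client.query(sql, job_config=bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start", "DATE", "2025-01-01"),
    ]))
for row in job.result():                    # blocks until the job finishes
    print(row["order_date"], row["revenue"])
```

Parameterized queries (the `@start` placeholder) are the idiomatic way to avoid string-building SQL in pipeline code.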
Posted 3 weeks ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Google BigQuery
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Full-time 15 years qualification
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills: Google BigQuery
Good to have skills: No Technology Specialization
Job Requirements:
Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
1: Proven track record of delivering data integration and data warehousing solutions
2: Strong hands-on SQL skills (No FLEX)
3: Experience with data integration and migration projects
4: Proficiency in the BigQuery SQL language (No FLEX)
5: Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes); experience in cloud solutions, mainly data platform services; GCP certifications
6: Experience in shell scripting, Python (No FLEX), Oracle, and SQL
Technical Experience:
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred
2: Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX)
3: Proficiency with tools that automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4: Open mindset and the ability to quickly adopt new technologies
5: Performance tuning of BigQuery SQL scripts
6: GCP certification preferred
7: Experience working in an agile environment
Professional Attributes:
1: Good communication skills
2: Ability to collaborate with different teams and suggest solutions
3: Ability to work independently with little supervision or as part of a team
4: Good analytical and problem-solving skills
5: Good team-handling skills
Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and to work as an individual contributor
Posted 3 weeks ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery
Good to have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills: Google BigQuery
Good to have skills: No Technology Specialization
Job Requirements:
Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
1: Proven track record of delivering data integration and data warehousing solutions
2: Strong hands-on SQL skills (No FLEX)
3: Experience with data integration and migration projects
4: Proficiency in the BigQuery SQL language (No FLEX)
5: Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes); experience in cloud solutions, mainly data platform services; GCP certifications
6: Experience in shell scripting, Python (No FLEX), Oracle, and SQL
Technical Experience:
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred
2: Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX)
3: Proficiency with tools that automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4: Open mindset and the ability to quickly adopt new technologies
5: Performance tuning of BigQuery SQL scripts
6: GCP certification preferred
7: Experience working in an agile environment
Professional Attributes:
1: Good communication skills
2: Ability to collaborate with different teams and suggest solutions
3: Ability to work independently with little supervision or as part of a team
4: Good analytical and problem-solving skills
5: Good team-handling skills
Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and to work as an individual contributor
Posted 3 weeks ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery
Good to have skills: Google Cloud Data Services, Microsoft SQL Server
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills: Google BigQuery
Good to have skills: No Technology Specialization
Job Requirements:
Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
1: Proven track record of delivering data integration and data warehousing solutions
2: Strong hands-on SQL skills (No FLEX)
3: Experience with data integration and migration projects
4: Proficiency in the BigQuery SQL language (No FLEX)
5: Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes); experience in cloud solutions, mainly data platform services; GCP certifications
6: Experience in shell scripting, Python (No FLEX), Oracle, and SQL
Technical Experience:
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred
2: Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX)
3: Proficiency with tools that automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4: Open mindset and the ability to quickly adopt new technologies
5: Performance tuning of BigQuery SQL scripts
6: GCP certification preferred
7: Experience working in an agile environment
Professional Attributes:
1: Good communication skills
2: Ability to collaborate with different teams and suggest solutions
3: Ability to work independently with little supervision or as part of a team
4: Good analytical and problem-solving skills
5: Good team-handling skills
Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and to work as an individual contributor
Posted 3 weeks ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery
Good to have skills: Google Cloud Data Services, Microsoft SQL Server
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills: Google BigQuery
Good to have skills: No Technology Specialization
Job Requirements:
Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
1: Proven track record of delivering data integration and data warehousing solutions
2: Strong hands-on SQL skills (No FLEX)
3: Experience with data integration and migration projects
4: Proficiency in the BigQuery SQL language (No FLEX)
5: Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes); experience in cloud solutions, mainly data platform services; GCP certifications
6: Experience in shell scripting, Python (No FLEX), Oracle, and SQL
Technical Experience:
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred
2: Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX)
3: Proficiency with tools that automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4: Open mindset and the ability to quickly adopt new technologies
5: Performance tuning of BigQuery SQL scripts
6: GCP certification preferred
7: Experience working in an agile environment
Professional Attributes:
1: Good communication skills
2: Ability to collaborate with different teams and suggest solutions
3: Ability to work independently with little supervision or as part of a team
4: Good analytical and problem-solving skills
5: Good team-handling skills
Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and to work as an individual contributor
Posted 3 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google BigQuery
Good to have skills: Microsoft SQL Server, Google Cloud Data Services
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in enhancing the overall data infrastructure and ensuring that data is accessible and reliable for decision-making purposes.
Project Role: Analytics and Modeler
Project Role Description: Analyze and model client, market and key performance data. Use analytical tools and techniques to develop business insights and improve decision-making.
Must have skills: Google BigQuery
Good to have skills: No Technology Specialization
Job Requirements:
Key Responsibilities: Dataproc, Pub/Sub, Dataflow, Kafka Streaming, Looker, SQL (No FLEX)
1: Proven track record of delivering data integration and data warehousing solutions
2: Strong hands-on SQL skills (No FLEX)
3: Experience with data integration and migration projects
4: Proficiency in the BigQuery SQL language (No FLEX)
5: Understanding of cloud-native services (bucket storage, GBQ, Cloud Functions, Pub/Sub, Composer, and Kubernetes); experience in cloud solutions, mainly data platform services; GCP certifications
6: Experience in shell scripting, Python (No FLEX), Oracle, and SQL
Technical Experience:
1: Expert in Python (No FLEX); strong hands-on knowledge of SQL (No FLEX) and Python programming using Pandas and NumPy; deep understanding of data structures (dictionary, array, list, tree, etc.); experience with pytest and code coverage is preferred
2: Strong hands-on experience building solutions with cloud-native services: bucket storage, BigQuery, Cloud Functions, Pub/Sub, Composer, Kubernetes, etc. (No FLEX)
3: Proficiency with tools that automate AZDO CI/CD pipelines, such as Control-M, GitHub, JIRA, and Confluence
4: Open mindset and the ability to quickly adopt new technologies
5: Performance tuning of BigQuery SQL scripts
6: GCP certification preferred
7: Experience working in an agile environment
Professional Attributes:
1: Good communication skills
2: Ability to collaborate with different teams and suggest solutions
3: Ability to work independently with little supervision or as part of a team
4: Good analytical and problem-solving skills
5: Good team-handling skills
Educational Qualification: 15 years of full-time education
Additional Information: Candidate should be ready for Shift B and to work as an individual contributor
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
To know more, check out work.hike.in. At Hike, we're building the Rush Gaming Universe 🎮 📲 💰
Introduction 📖
At Hike, we're revolutionizing gaming and tech by blending innovation and immersive experiences. With our foray into Web3 gaming, we're exploring uncharted territories to create products that redefine fun and ownership. Join us as we make waves in this exciting new frontier.
Hike Code 📝 (Our core cultural values)
The Hike Code is our cultural operating system. It is our set of values that guides us operationally on a day-to-day basis. We have 9 core values:
Top Talent in Every Role → Both a quest for greatness & shared values are important to us 🦸‍♂️
Pro-Sports Team → Strength-based, results driven with a "team-first" attitude ⚽️
Customer Obsession → We exist to delight our customers
Innovation & Make Magic → Courage to walk into the unknown and pioneer new fronts
Owner not a Renter → Proactive & radically responsible. Everyone is an owner
Think Deeply → Clear mind, obsession to simplify & data-informed 🙇‍♀️
Move Fast → Ruthless prioritization & move fast 🙋‍♂️
Be curious & keep learning → Curiosity to acquire new perspectives, quickly
Dream Big → Courage to climb big mountains
Skills & experience we're looking for
Final-year B.Tech/M.S. student or recent graduate in CS, IT, Math, Stats, or related field | Top Talent in Every Role
Solid programming abilities in Python with the ML/AI stack (NumPy, Pandas, Scikit-Learn, TensorFlow) | Top Talent in Every Role
Good grasp of Data Structures, Algorithms, and basic system-design concepts | Top Talent in Every Role
Coursework or projects demonstrating machine-learning fundamentals (regression, classification, DL models, Agentic AI) | Be Insatiably Curious & Keep Improving
Familiarity with SQL and eagerness to dive into data pipelines (Kafka, MongoDB, BigQuery, or similar) | Think Deeply & Exercise Good Judgement
Ability to be self-directed and learn quickly, with a strong desire to stay on top of the latest AI developments | Be Insatiably Curious & Keep Improving
Comfort using AI tools (Cursor, GPT, Claude) to accelerate development | Move Fast & Be Dynamic
Strong written and verbal communication skills; collaborative mindset | Pro-Sports Team
You will be responsible for
Strategy: Work extensively on our Multi-Agent AI Analytics System, expanding capabilities to deliver conversational insights at scale
Strategy: Design and iterate on ML models powering real-time personalization, matchmaking, and churn prediction
Strategy: Drive experimentation that boosts engagement, retention, and monetization through user-level intelligence
Operations: Monitor real-time data pipelines that feed anomaly detection, feature stores, and matchmaking services
Operations: Optimize and benchmark ML inference for live gameplay scenarios (spin-the-wheel rewards, sticker recommendations, GBM matchmaking)
Collaboration: Partner with product, backend, and design to turn insights into delightful player experiences
Collaboration: Champion AI-driven tooling and workflow automation across the team
💰 Benefits → We have tremendous benefits & perks. Check out work.hike.in to know more
Posted 3 weeks ago