
497 SciPy Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Data Analyst - Python
Job Classification: Full-Time
Work Location: Work-From-Office (Hyderabad)
Education: BE-BCS, B-Tech-IT, MCA or equivalent
Experience Level: 3 years (2+ years' data analysis experience)

Company Description
Team Geek Solutions (TGS) is a global technology partner based in Texas, specializing in AI and Generative AI solutions, custom software development, and talent optimization. TGS offers a range of services tailored to industries like BFSI, Telecom, FinTech, Healthcare, and Manufacturing. With expertise in AI/ML development, cloud migration, software development, and more, TGS helps businesses achieve operational efficiency and drive innovation.

Position Description
We are looking for a Data Analyst to analyze large amounts of raw information to find patterns that will help improve our products. We will rely on you to build data models to extract valuable business insights. In this role, you should be highly analytical, with a knack for analysis, math, and statistics. Your task is to gather and prepare data from multiple sources, run statistical analyses, and communicate your findings in a clear and objective way. Your goal will be to help our company analyze trends to make better decisions.

Qualifications/Skills Required
- 2+ years' experience in Python, with knowledge of packages such as pandas, NumPy, SciPy, scikit-learn, and Flask
- Proficiency in at least one data visualization tool, such as Matplotlib, Seaborn, or Plotly
- Experience with popular statistical and machine learning techniques, such as clustering, SVM, KNN, decision trees, etc.
- Experience with databases, such as SQL and MongoDB
- Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
- Knowledge of the Python libraries OpenCV and TensorFlow is a plus

Job Responsibilities / Essential Functions
- Identify, analyze, and interpret trends or patterns in complex data sets
- Explore and visualize data
- Use machine learning tools to select features and to create and optimize classifiers
- Clearly communicate findings from the analysis, turning information into something actionable through reports, dashboards, and/or presentations

Skills: pandas, business insights, NumPy, Plotly, SciPy, Python, MongoDB, analytical skills, TensorFlow, statistics, machine learning, SQL, data, OpenCV, Seaborn, data visualization, Matplotlib, scikit-learn, Flask
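As a small illustration of the stack this posting names (pandas plus scikit-learn for techniques like clustering), here is a minimal sketch on made-up data:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical data with two obvious groups, for illustration only.
df = pd.DataFrame({
    "x": [1.0, 1.2, 0.8, 9.0, 9.3, 8.7],
    "y": [1.1, 0.9, 1.0, 9.1, 8.8, 9.2],
})

# Fit a 2-cluster KMeans model and attach the cluster label to each row.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(df[["x", "y"]])
df["cluster"] = model.labels_

# Points near (1, 1) and points near (9, 9) land in different clusters.
print(df.groupby("cluster").size().tolist())  # -> [3, 3]
```

The same pattern (load into a DataFrame, fit an estimator, inspect labels) carries over to the SVM, KNN, and decision-tree techniques the posting lists.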

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Hi, we have an opening for a Senior QA Engineer role (5 days from office):
- Strong knowledge and hands-on experience with MS SQL/Oracle
- Hands-on experience writing Python scripts for API testing and data testing; comfortable using data science libraries such as pandas, NumPy, SciPy, etc.
Interested candidates, please revert with an updated CV. The JD is below.

Job Purpose
ICE Data Services India Private Limited, a subsidiary of Intercontinental Exchange, Inc., is seeking a passionate Senior QA Engineer to join our Quality Assurance team in Hyderabad, India. The candidate will work closely with Business Analysts, End-Users, and Developers to facilitate and understand requirements and the impact of changes, in order to assist in debugging and enhancing ICE Data Services applications.

Responsibilities
- Review functional requirements to assess their impact on the software applications and formulate test cases from them.
- Write concise, complete, well-organized bug reports, test cases, and status reports.
- Analyze product requirements and ensure testing is aligned with a risk-based test approach, mitigating risk exposure within all phases of testing.
- Participate in analyzing the root causes of problems found and assist developers with countermeasures to remove those causes.
- Evaluate and recommend enhancements for the product under test.
- Create detailed, comprehensive, and well-structured test plans and test cases.
- Estimate, prioritize, plan, and coordinate testing activities.
- Demonstrate exceptional interpersonal and communication skills and the confidence to work with senior stakeholders from development and product.
- Evaluate the effectiveness and efficiency of the QA methods and procedures used, and undertake improvement projects to increase QA effectiveness and efficiency.
- Provide release support during production software deployment.
- Bring a "can do" attitude and enjoy working within a highly collaborative environment.

Knowledge and Experience
- 7+ years of experience in software quality assurance
- Good understanding of quality assurance concepts, practices, and tools
- Strong knowledge and hands-on experience with MS SQL/Oracle
- Hands-on experience writing Python scripts for API testing and data testing; comfortable using data science libraries such as pandas, NumPy, SciPy, etc.
- Attention to detail and the ability to work on multiple projects at the same time
- Highly motivated team player with very strong analytical, detail-oriented, organized, diagnostic, and debugging skills
- Excellent interpersonal, verbal, and written skills
- Self-starter, energetic, able to prioritize workload and work with minimal supervision
- Experience with mainstream defect-tracking and test-management tools

Desired Knowledge and Experience
- Experience in the financial industry (experience with Fixed Income products is preferred)
- Experience with UNIX/Linux systems
- Performance testing using JMeter or a similar tool
- Experience with version control systems like Git

Preferred degree(s), license(s), and/or certification(s)
- B.S./B.Tech in Computer Science, Electrical Engineering, Math, or equivalent
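The data-testing duties described here usually reduce to assertion-style checks over query results with pandas; a minimal sketch (the table and column names are invented, and the in-memory DataFrame stands in for rows that a real test would fetch from MS SQL/Oracle with `pd.read_sql`):

```python
import pandas as pd

# Invented stand-in for rows fetched from the database under test.
prices = pd.DataFrame({
    "instrument_id": [101, 102, 103],
    "bid": [99.5, 101.2, 98.7],
    "ask": [99.7, 101.4, 99.0],
})

# Typical data-quality assertions: unique keys, no nulls, bid never above ask.
assert prices["instrument_id"].is_unique
assert prices.notna().all().all()
assert (prices["bid"] <= prices["ask"]).all()
print("all data checks passed")
```

Wrapping checks like these in a test framework (pytest, for example) is what turns ad-hoc scripts into the repeatable data tests the role calls for.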

Posted 1 month ago

Apply

10.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Title: Principal Data Scientist
Location: Chennai or Bangalore
Experience: 10-13 years

Job Summary
We are seeking a highly skilled, techno-functional Optimization Specialist with 10–13 years of experience developing enterprise-grade optimization solutions and platforms. The ideal candidate will possess deep expertise in mathematical optimization, strong hands-on Python programming skills, and the ability to bridge the gap between technical and business teams. You will lead the design and deployment of scalable optimization engines to solve complex business problems across supply chain, manufacturing, pricing, logistics, and workforce planning.

Key Responsibilities
- Design & Development: Architect and implement optimization models (LP, MILP, CP, metaheuristics) using solvers like Gurobi, CPLEX, or open-source equivalents.
- Platform Building: Lead the design and development of optimization-as-a-service platforms with a modular, reusable architecture.
- Techno-Functional Role: Translate business requirements into formal optimization problems and provide functional consulting support across domains.
- End-to-End Ownership: Manage the full lifecycle, from problem formulation, model design, and data pipeline integration to production deployment.
- Python Expertise: Build robust, production-grade, modular code using Python, pandas, NumPy, Pyomo/PuLP, and APIs (FastAPI/Flask).
- Collaboration: Work with business stakeholders, data scientists, and software engineers to ensure solutions are accurate, scalable, and aligned with objectives.
- Performance Tuning: Continuously improve model runtime and performance; conduct sensitivity analysis and scenario modeling.
- Innovation: Stay abreast of the latest optimization techniques, frameworks, and tools; proactively suggest enhancements.

Required Skills & Qualifications
- Bachelor's or Master's in Operations Research, Industrial Engineering, Computer Science, or a related field.
- 10–12 years of experience solving real-world optimization problems.
- Deep understanding of mathematical programming (LP/MILP/CP), heuristics/metaheuristics, and stochastic modeling.
- Proficiency in Python and experience with relevant libraries (Pyomo, PuLP, OR-Tools, SciPy).
- Strong experience building end-to-end platforms or optimization engines deployed in production.
- Functional understanding of at least one domain: supply chain, logistics, manufacturing, pricing, scheduling, or workforce planning.
- Excellent communication skills; able to interact effectively with technical and business teams.
- Experience integrating optimization models into enterprise systems (APIs, cloud deployment, etc.).

Preferred Qualifications
- Exposure to cloud platforms (AWS, GCP, Azure) and MLOps pipelines.
- Familiarity with data visualization tools (Dash, Plotly, Streamlit) for presenting optimization results.
- Certification or training in operations research or mathematical optimization tools.
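A linear program of the kind this role formulates (LP/MILP via Pyomo, PuLP, OR-Tools, or SciPy) can be sketched with `scipy.optimize.linprog`; the toy production-planning numbers below are invented:

```python
from scipy.optimize import linprog

# Toy production plan (invented numbers): maximize 3x + 2y
# subject to x + y <= 4 and x <= 3, with x, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
res = linprog(
    c=[-3, -2],
    A_ub=[[1, 1], [1, 0]],
    b_ub=[4, 3],
    bounds=[(0, None), (0, None)],
    method="highs",
)

assert res.success
print(res.x, -res.fun)  # optimum at x=3, y=1 with objective value 11
```

For MILP or CP the same formulation step applies, but integrality constraints push you to the solver families the posting names (Gurobi, CPLEX, OR-Tools).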

Posted 1 month ago

Apply

0.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Data Science and AI Developer

**Job Description:**
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated on the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**
1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

- Data Manipulation and Analysis: NumPy, Pandas
- Data Visualization: Matplotlib, Seaborn, Power BI
- Machine Learning Libraries: scikit-learn, TensorFlow, Keras
- Statistical Analysis: SciPy
- Web Scraping: Scrapy
- IDE: PyCharm, Google Colab
- HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
- Python Django Expertise: In-depth knowledge of e-commerce functionality or deep Python Django knowledge.
- Theming: Proven experience designing and implementing custom themes for Python websites.
- Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing, user-friendly interfaces for various devices.
- Problem Solving: Excellent problem-solving skills, with the ability to troubleshoot and resolve issues independently.
- Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
- Interns must know how to connect the front end with the data science components, and vice versa.

**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or plan to relocate before starting work (Preferred)
Work Location: In person
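The "Statistical Analysis: SciPy" item in the skills list above covers tasks like hypothesis testing; a minimal sketch on synthetic samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Two synthetic samples whose true means differ by one standard deviation.
a = rng.normal(loc=0.0, scale=1.0, size=200)
b = rng.normal(loc=1.0, scale=1.0, size=200)

# Independent two-sample t-test: is the difference in means significant?
t_stat, p_value = stats.ttest_ind(a, b)
print(p_value < 0.05)  # -> True: a 1-sigma gap at n=200 is easily detected
```

The same `scipy.stats` namespace provides the nonparametric and distribution-fitting tools a role like this typically leans on.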

Posted 1 month ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

Bengaluru

Work from Office

This job posting may be removed earlier if the position is filled or if a sufficient number of applications is received.

Meet the Team
We are a dynamic and innovative team of Data Engineers, Data Architects, and Data Scientists based in Bangalore, India. Our mission is to harness the power of data to provide actionable insights that empower executives to make informed, data-driven decisions. By analyzing and interpreting complex datasets, we enable the organization to understand the health of the business and identify opportunities for growth and improvement.

Your Impact
We are seeking a highly experienced and skilled Senior Data Scientist to join our dynamic team. The ideal candidate will possess deep expertise in machine learning models, artificial intelligence (AI), generative AI, and data visualization. Proficiency in Tableau and other visualization tools is essential. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge of various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization.

Key Responsibilities
- Design, develop, and implement advanced machine learning models to solve complex business problems.
- Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities.
- Utilize Tableau and other visualization tools to create insightful, actionable dashboards for stakeholders.
- Manage and optimize large datasets using Snowflake and Teradata databases.
- Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions.
- Stay updated on the latest advancements in data science, machine learning, and AI technologies.
- Mentor and guide junior data scientists, fostering a culture of continuous learning and development.
- Communicate complex analytical concepts and results to non-technical stakeholders effectively.

Key Technologies & Skills
- Machine Learning Models: Supervised learning, unsupervised learning, reinforcement learning, deep learning, neural networks, decision trees, random forests, support vector machines (SVM), clustering algorithms, etc.
- AI Techniques: Natural language processing (NLP), computer vision, generative adversarial networks (GANs), transfer learning, etc.
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn, Plotly, etc.
- Databases: Snowflake, Teradata, SQL, NoSQL databases.
- Programming Languages: Python (essential), R, SQL.
- Python Libraries: TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Keras, SciPy, etc.
- Data Processing: ETL processes, data warehousing, data lakes.
- Cloud Platforms: AWS, Azure, Google Cloud Platform.

Minimum Qualifications
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Data Science, or a related field.
- Minimum of [X] years of experience as a Data Scientist or in a similar role.
- Proven track record of developing and deploying machine learning models and AI solutions.
- Strong expertise in data visualization tools, particularly Tableau.
- Extensive experience with Snowflake and Teradata databases.
- Excellent problem-solving skills and the ability to work both independently and collaboratively.
- Exceptional communication skills, with the ability to convey complex information clearly.

Preferred Qualifications
- Excellent communication and collaboration skills for working effectively in cross-functional teams.
- Ability to translate business requirements into technical solutions.
- Strong problem-solving skills and the ability to work with complex datasets.
- Experience in statistical analysis and machine learning techniques.
- Understanding of business domains such as sales, financials, marketing, and telemetry.
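Among the model families listed above, a decision tree is the quickest to sketch with the scikit-learn library the posting names; the data here is synthetic and the hyperparameters are arbitrary:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data, for illustration only.
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a shallow tree and report held-out accuracy.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))  # accuracy on the held-out split
```

Swapping `DecisionTreeClassifier` for `RandomForestClassifier` or `SVC` exercises the other supervised families in the same list with no other code changes.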

Posted 1 month ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Gurugram

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow: informed and validated by science and data, superpowered by creativity and design, all underpinned by technology created with purpose.

Your Role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or with a team to help design and develop ML/NLP models as required. You will work closely with the Product Owner, Systems Architect, and other key stakeholders from conceptualization through implementation of the project. You should take ownership of understanding the client requirement, the data to be used, security and privacy needs, and the infrastructure to be used for development and implementation. The candidate will be responsible for executing data science projects independently to deliver business outcomes, and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The role also requires the candidate to collaborate on ML asset creation and to be eager to learn and to deliver training to fellow data science professionals. We expect thought leadership from the candidate, especially in proposing ML/NLP assets to build based on expected industry requirements. Experience building industry-specific (e.g. Manufacturing, R&D, Supply Chain, Life Sciences), production-ready AI models using microservices and web services is a plus.

- Programming Languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn)
- Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.)
- ML/DL Frameworks: scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks: Spark (Spark ML, GraphX), H2O
- Cloud: Azure/AWS/GCP

Your Profile
- Predictive and prescriptive modelling using statistical and machine learning algorithms, including but not limited to time series, regression, trees, ensembles, and neural nets (deep and shallow: CNN, LSTM, Transformers, etc.).
- Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection, etc. is a plus.
- Unsupervised learning: market basket analysis, collaborative filtering, and dimensionality reduction, with a good understanding of common matrix decomposition approaches like SVD.
- Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering such as spectral clustering.
- NLP: information extraction, similarity matching, sentiment analysis, text clustering, semantic analysis, document summarization, context mapping/understanding, intent classification, word embeddings, and vector space models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus.
- Use of Transformers for NLP, experience with LLMs (ChatGPT, Llama), use of RAG (with vector stores and frameworks like LangChain and LangGraph), and building agentic AI applications.
- Model deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and in the cloud.

Required Qualifications
A Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science, or Engineering, or a Bachelor's degree with relevant experience. Good experience programming in languages such as Python/Java/Scala and SQL, plus experience with data visualization tools like Tableau or Power BI.

Preferred Experience
- Experienced in Agile ways of working; managing team effort and tracking it through JIRA.
- Experience in proposal, RFP, RFQ, and pitch creation and delivery to large forums.
- Experience in POC, MVP, and PoV asset creation with innovative use cases.
- Experience working in a consulting environment is highly desirable.
- High-impact client communication.

The job may also entail sitting and working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.

What you will love about working here
We recognize the significance of flexible work arrangements. Be it remote work or flexible work hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies, such as Generative AI.
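The profile above mentions dimensionality reduction via matrix decomposition (SVD); a minimal NumPy sketch on synthetic low-rank data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic rank-2 data embedded in 10 dimensions, plus a little noise.
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(100, 2)) @ basis + 0.01 * rng.normal(size=(100, 10))

# Thin SVD; keep the top 2 singular components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X2 = U[:, :2] * s[:2]      # 2-D representation of each row
X_hat = X2 @ Vt[:2]        # reconstruction from the top 2 components

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(rel_err < 0.05)  # -> True: two components capture almost everything
```

The same decomposition underpins the collaborative-filtering and vector-space-model items in the profile, where the factors are interpreted as latent features.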

Posted 1 month ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, Pyspark/Pytorch
Good-to-have skills: Redshift
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database, and building tracking solutions that ensure data quality.
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big-data (Apache Spark) environments, data libraries (e.g. Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core; Databricks Data Engineering.

Qualification
Experience: 3.5-5 years of experience is required.
Educational Qualification: Graduation.
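The pipeline responsibilities above (source data, format it, ensure quality, aggregate) can be sketched at toy scale with pandas; the file contents and column names are invented:

```python
import io
import pandas as pd

# Stand-in for an extracted source file; in practice this would be
# pd.read_csv on a path or an object-store URI (e.g. S3/ADLS).
raw = io.StringIO("order_id,amount,country\n1,10.5,IN\n2,,IN\n3,7.0,US\n")
df = pd.read_csv(raw)

# Transform: drop rows failing the data-quality check (missing amount),
# then aggregate per country into an analyzable summary.
clean = df.dropna(subset=["amount"])
summary = clean.groupby("country", as_index=False)["amount"].sum()

print(summary.to_dict("records"))
# -> [{'country': 'IN', 'amount': 10.5}, {'country': 'US', 'amount': 7.0}]
```

At production scale the same extract-validate-aggregate shape is expressed in PySpark or Databricks jobs rather than in-memory pandas.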

Posted 1 month ago

Apply

2.0 - 3.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, Pyspark/Pytorch
Good-to-have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database, and building tracking solutions that ensure data quality.
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big-data (Apache Spark) environments, data libraries (e.g. Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core; Databricks Data Engineering.

Qualification
Experience: 3.5-5 years of experience is required.
Educational Qualification: Graduation.

Posted 1 month ago

Apply

3.0 - 4.0 years

5 - 9 Lacs

Kochi

Work from Office

Job Title - + +
Management Level:
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python, Pyspark
Good-to-have skills: Redshift

Job Summary
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles & Responsibilities
- Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools.
- Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture).
- Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases.
- Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery.
- Optimize data pipelines for performance and cost efficiency.
- Implement and enforce best practices for data governance, access control, security, and compliance in the cloud.
- Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
- Lead and mentor junior engineers, fostering a culture of continuous learning and innovation.
- Excellent communication skills; able to work independently and with clients based in Western Europe.

Professional & Technical Skills
- Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals.
- Solving complex data problems to deliver insights that help our business achieve its goals.
- Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format.
- Creating data products for analytics team members to improve productivity.
- Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline.
- Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions.
- Preparing data to create a unified database, and building tracking solutions that ensure data quality.
- Creating production-grade analytical assets deployed using the guiding principles of CI/CD.
- Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript.
- Extensive experience in data analysis in big-data (Apache Spark) environments, data libraries (e.g. Pandas, SciPy, TensorFlow, Keras), and SQL, with 3-4 years of hands-on experience working with these technologies.
- Experience in one of the many BI tools, such as Tableau, Power BI, or Looker.
- Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs.
- Extensive work in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse.

Additional Information
- Experience working in cloud data warehouses like Redshift or Synapse.
- Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core; Databricks Data Engineering.

Qualification
Experience: 5-8 years of experience is required.
Educational Qualification: Graduation.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Automotive ECU Software Good to have skills : NAMinimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary :As an Algorithm/Data Analytics Engineer, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery. Roles & Responsibilities:Work closely with Cross functional product owners to understand user needs and design algorithms that enable new features or improve existing functionality. Continuously monitor the performance of algorithms and implement optimization techniques. Collaborate with engineers and other stakeholders to understand system requirements, define algorithm specifications, and conduct performance evaluations. Participate in code reviews and provide constructive feedback to ensure code quality and adherence to best practices. Document algorithms clearly and concisely, including design rationale, assumptions, and limitations. Professional & Technical Skills: Minimum of 7 years of experience in software as a service, preferably in the automotive industry. Expertise in python programming language with data science libraries (NumPy, SciPy, Pandas etc.)Understanding of AWS cloud platform and how to design algorithms that work efficiently in cloud environments. Ability to measure the efficiency and performance of algorithms using metrics such as latency, throughput, and resource utilization. Familiarity with testing methodologies to validate algorithm performance in real-world conditions. 
Additional Information: The candidate should have a minimum of 5 years of experience in the automotive industry. This position is based at our Hyderabad office. 15 years of full-time education is required.
Qualification: 15 years full-time education

Posted 1 month ago

Apply

2.0 - 3.0 years

4 - 8 Lacs

Kochi

Work from Office

Job Title - Data Engineer Sr. Analyst ACS SONG
Management Level: Level 10 Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must-have skills: Python/Scala, PySpark/PyTorch
Good-to-have skills: Redshift
Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.
Roles and Responsibilities
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
Solving complex data problems to deliver insights that help the business achieve its goals
Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
Creating data products for analytics team members to improve productivity
Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline
Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions
Preparing data to create a unified database and build tracking solutions ensuring data quality
Creating production-grade analytical assets deployed using the guiding principles of CI/CD
Professional and Technical Skills
Expert in at least two of Python, Scala, PySpark, PyTorch, JavaScript
Extensive experience in data analysis (big data, Apache Spark environments), data libraries (e.g. Pandas, SciPy, TensorFlow, Keras) and SQL; 2-3 years of hands-on experience working with these technologies
Experience in one of the many BI tools, such as Tableau, Power BI or Looker
Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and the corresponding infrastructure needs
Worked extensively with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools and the Snowflake Cloud Data Warehouse
Additional Information
Experience working with cloud data warehouses like Redshift or Synapse
Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified: Azure Data Scientist Associate; SnowPro Core / Data Engineer; Databricks Data Engineering
About Our Company | Accenture
Qualification
Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation

Posted 1 month ago

Apply

0.0 - 5.0 years

5 - 9 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Write effective, scalable code
Develop back-end components to improve responsiveness and overall performance
Integrate user-facing elements into applications
Test and debug programs
Improve functionality of existing systems
Required Candidate Profile
Expertise in at least one popular Python framework (like Django, Flask or Pyramid)
Familiarity with front-end technologies (like JavaScript and HTML5)
Team spirit
Good problem-solving skills
Perks and Benefits
Free meals and snacks. Bonus. Vision insurance.

Posted 1 month ago

Apply

0.0 - 5.0 years

5 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Write effective, scalable code
Develop back-end components to improve responsiveness and overall performance
Integrate user-facing elements into applications
Test and debug programs
Improve functionality of existing systems
Required Candidate Profile
Expertise in at least one popular Python framework (like Django, Flask or Pyramid)
Familiarity with front-end technologies (like JavaScript and HTML5)
Team spirit
Good problem-solving skills
Perks and Benefits
Free meals and snacks. Bonus. Vision insurance.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Job Purpose
ICE Data Services India Private Limited, a subsidiary of Intercontinental Exchange, Inc., is seeking a passionate Senior QA Engineer to join our Quality Assurance team in Hyderabad, India. The candidate will work closely with Business Analysts, End-Users and Developers to understand requirements and the impact of changes, and to assist in debugging and enhancing ICE Data Services applications.
Responsibilities
Review functional requirements to assess their impact on the software applications and formulate test cases from them.
Write concise, complete, well-organized bug reports, test cases, and status reports.
Analyze product requirements and ensure the testing is aligned with a risk-based test approach, mitigating risk exposure within all phases of testing.
Participate in analyzing root causes of problems found and assist developers with countermeasures to remove the causes.
Evaluate and recommend enhancements for the product under test.
Create detailed, comprehensive and well-structured test plans and test cases.
Estimate, prioritize, plan and coordinate testing activities.
Demonstrate exceptional interpersonal and communication skills and confidence to work with senior stakeholders from development and product.
Evaluate the effectiveness and efficiency of QA methods and procedures used and undertake improvement projects to improve QA effectiveness and efficiency.
Provide release support during production software deployment.
A "can do" attitude; enjoys working within a highly collaborative work environment.
Knowledge and Experience
At least 7 years of experience in the field of Software Quality Assurance
Good understanding of Quality Assurance concepts, practices and tools
Strong knowledge and hands-on experience with MS SQL/Oracle
Hands-on experience writing Python scripts for API testing and data testing; comfortable using data science libraries such as pandas, NumPy, SciPy, etc.
Attention to detail and ability to work on multiple projects at the same time
Highly motivated team player, with very strong analytical, detail-oriented, organized, diagnostic, and debugging skills
Excellent interpersonal, verbal and written skills
Self-starter, energetic, able to prioritize workload and work with minimal supervision
Experience with mainstream defect tracking tools and test management tools
Desired Knowledge and Experience
Experience in the financial industry (experience with Fixed Income products is preferred)
Experience with UNIX/Linux systems
Performance testing using JMeter or a similar tool
Experience with version control systems like Git
Preferred Degree(s), License(s), and/or Certification(s)
B.S./B.Tech in Computer Science, Electrical Engineering, Math or equivalent
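The "data testing with pandas" requirement above can be illustrated with a minimal sketch; the column names and the quality rules are invented for the example, not ICE's actual checks:

```python
import pandas as pd

def check_prices(df: pd.DataFrame) -> list:
    """Return a list of data-quality violations found in a price feed."""
    problems = []
    if df["price"].isna().any():
        problems.append("null prices")          # missing values
    if (df["price"] <= 0).any():
        problems.append("non-positive prices")  # impossible quotes
    if df["symbol"].duplicated().any():
        problems.append("duplicate symbols")    # one row expected per symbol
    return problems

# A toy feed with two deliberate defects.
feed = pd.DataFrame({"symbol": ["AAA", "BBB", "BBB"],
                     "price": [101.5, -3.0, 99.0]})
issues = check_prices(feed)
```

In a QA pipeline a non-empty `issues` list would typically fail the test run and be attached to the bug report.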

Posted 1 month ago

Apply

7.0 years

10 - 11 Lacs

Hyderābād

On-site

Hyderabad, India | Technology | In-Office | 10530
Job Description
Job Purpose
ICE Data Services India Private Limited, a subsidiary of Intercontinental Exchange, Inc., is seeking a passionate Senior QA Engineer to join our Quality Assurance team in Hyderabad, India. The candidate will work closely with Business Analysts, End-Users and Developers to understand requirements and the impact of changes, and to assist in debugging and enhancing ICE Data Services applications.
Responsibilities
Review functional requirements to assess their impact on the software applications and formulate test cases from them.
Write concise, complete, well-organized bug reports, test cases, and status reports.
Analyze product requirements and ensure the testing is aligned with a risk-based test approach, mitigating risk exposure within all phases of testing.
Participate in analyzing root causes of problems found and assist developers with countermeasures to remove the causes.
Evaluate and recommend enhancements for the product under test.
Create detailed, comprehensive and well-structured test plans and test cases.
Estimate, prioritize, plan and coordinate testing activities.
Demonstrate exceptional interpersonal and communication skills and confidence to work with senior stakeholders from development and product.
Evaluate the effectiveness and efficiency of QA methods and procedures used and undertake improvement projects to improve QA effectiveness and efficiency.
Provide release support during production software deployment.
A "can do" attitude; enjoys working within a highly collaborative work environment.
Knowledge and Experience
At least 7 years of experience in the field of Software Quality Assurance
Good understanding of Quality Assurance concepts, practices and tools
Strong knowledge and hands-on experience with MS SQL/Oracle
Hands-on experience writing Python scripts for API testing and data testing; comfortable using data science libraries such as pandas, NumPy, SciPy, etc.
Attention to detail and ability to work on multiple projects at the same time
Highly motivated team player, with very strong analytical, detail-oriented, organized, diagnostic, and debugging skills
Excellent interpersonal, verbal and written skills
Self-starter, energetic, able to prioritize workload and work with minimal supervision
Experience with mainstream defect tracking tools and test management tools
Desired Knowledge and Experience
Experience in the financial industry (experience with Fixed Income products is preferred)
Experience with UNIX/Linux systems
Performance testing using JMeter or a similar tool
Experience with version control systems like Git
Preferred Degree(s), License(s), and/or Certification(s)
B.S./B.Tech in Computer Science, Electrical Engineering, Math or equivalent

Posted 1 month ago

Apply

0 years

0 - 0 Lacs

Cochin

On-site

Job Title: Faculty Member - Python, Data Science, and Artificial Intelligence
Department: Computer Science / Data Science / Artificial Intelligence
Location: Kochi
Position Type: Full-time
Position Overview
We are seeking a highly qualified and motivated individual for a faculty position in Python programming, Data Science, and Artificial Intelligence. The successful candidate will contribute to the academic growth of our students, engage in cutting-edge research in AI, machine learning, and data science, and participate in collaborative projects across various disciplines. This position offers an exciting opportunity to shape the future of technology education while contributing to groundbreaking research in the fields of Data Science and Artificial Intelligence.
Key Responsibilities
Teaching & Curriculum Development
Develop and teach undergraduate and graduate-level courses in Python programming, Data Science, and AI.
Design and update course materials, including syllabi, lectures, assignments, and assessments.
Provide hands-on learning opportunities and practical application of Python, machine learning, and AI techniques.
Supervise and mentor students in their academic projects and research activities.
Should be willing to work overtime and should have good English knowledge.
Advising & Mentorship
Advise graduate and undergraduate students on research topics, theses, and projects.
Provide mentorship in career development, guiding students into relevant industry or academic roles.
Professional Development & Service
Stay current with advancements in Data Science, AI, and Python development.
Participate in departmental meetings, committees, and faculty activities.
Contribute to university-wide initiatives and outreach programs.
Attend and present at conferences, workshops, and seminars.
Qualifications
BE or ME in Computer Science, Data Science, Artificial Intelligence, or a closely related field.
Strong proficiency in Python programming and experience with Python libraries for Data Science (e.g., Pandas, NumPy, SciPy, Matplotlib).
Expertise in Machine Learning, Deep Learning, Natural Language Processing (NLP), or Computer Vision.
Demonstrated ability to teach at the undergraduate and graduate level in Data Science and AI.
A proven track record of research in AI, Data Science, or related fields.
Strong communication skills and the ability to engage students in research and problem-solving.
Preferred
Experience with version control systems (e.g., Git), data visualization tools, and databases.
Prior experience in curriculum development and teaching online courses.
Industry experience in Data Science, AI, or machine learning applications is an added advantage.
Application Instructions
Interested candidates should submit the following:
A cover letter that outlines their teaching philosophy, research interests, and career goals.
A curriculum vitae (CV) with a list of publications and references.
A statement of research interests, including potential projects and funding opportunities.
Three professional references, including one who can speak to the applicant's teaching capabilities.
Or share your resume; contact: 7994211184.
Job Type: Full-time
Pay: ₹10,000.00 - ₹25,000.00 per month
Schedule: Day shift / Morning shift
Supplemental Pay: Overtime pay; Performance bonus
Language: English (Preferred)
Work Location: In person

Posted 1 month ago

Apply

5.0 - 6.0 years

5 - 10 Lacs

India

On-site

Job Summary: We are seeking a highly skilled Python Developer to join our team.
Key Responsibilities:
Design, develop, and deploy Python applications
Work independently on machine learning model development, evaluation, and optimization
Implement scalable and efficient algorithms for predictive analytics and automation
Optimize code for performance, scalability, and maintainability
Collaborate with stakeholders to understand business requirements and translate them into technical solutions
Integrate APIs and third-party tools to enhance functionality
Document processes, code, and best practices for maintainability
Required Skills & Qualifications:
5-6 years of professional experience in Python application development
Proficiency in Python libraries such as Pandas, NumPy, SciPy, and Matplotlib
Experience with SQL and NoSQL databases (PostgreSQL, MongoDB, etc.)
Hands-on experience with big data technologies (Apache Spark, Delta Lake, Hadoop, etc.)
Strong experience in developing APIs and microservices using FastAPI, Flask, or Django
Good understanding of data structures, algorithms, and software development best practices
Strong problem-solving and debugging skills
Ability to work independently and handle multiple projects simultaneously
Good to have: working knowledge of cloud platforms (Azure/AWS/GCP) for deploying ML models and data applications
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,000,000.00 per year
Schedule: Fixed shift
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 01/07/2025
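The "APIs and microservices using Flask" skill above can be sketched in a few lines; the endpoint name and payload shape are invented for illustration, and Flask's built-in test client lets you exercise it without running a server:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/stats", methods=["POST"])
def stats():
    # Hypothetical endpoint: accepts {"values": [...]} and returns summary stats.
    values = request.get_json()["values"]
    return jsonify(n=len(values), mean=sum(values) / len(values))

# Exercise the endpoint in-process via Flask's test client.
client = app.test_client()
resp = client.post("/stats", json={"values": [2, 4, 6]})
payload = resp.get_json()
```

The same handler structure carries over to FastAPI or Django views; only the routing and request-parsing syntax changes.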

Posted 1 month ago

Apply

3.0 - 8.0 years

20 - 22 Lacs

Noida, Gurugram

Work from Office

Python/Quant Engineer
Key Responsibilities:
Design, develop, and maintain scalable Python-based quantitative tools and libraries
Collaborate with quants and researchers to implement and optimize pricing, risk, and trading models
Process and analyze large datasets (market, fundamental, alternative data) to support research and live trading
Build and enhance backtesting frameworks and data pipelines
Integrate models with execution systems and trading platforms
Optimize code for performance and reliability in low-latency environments
Participate in code reviews, testing, and documentation efforts
Required Qualifications:
3-8 years of professional experience in quantitative development or similar roles
Proficiency in Python, including libraries like NumPy, Pandas, SciPy and Scikit-learn, and experience in object-oriented programming
Strong understanding of data structures, algorithms, and software engineering best practices
Experience working with large datasets, data ingestion, and real-time processing
Exposure to financial instruments (equities, futures, options, FX, fixed income, etc.) and financial mathematics
Familiarity with backtesting, simulation, and strategy evaluation tools
Experience with Git, Docker, CI/CD, and modern development workflows
Preferred Qualifications:
Experience with C++ for performance-critical modules
Knowledge of machine learning techniques and tools (e.g., TensorFlow, XGBoost)
Familiarity with SQL/NoSQL databases and cloud platforms (AWS, GCP)
Prior experience in hedge funds, proprietary trading firms, investment banks, or financial data providers
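The vectorized backtesting work described above can be sketched minimally with pandas; the prices, rolling windows and crossover rule are illustrative toys, not a production framework:

```python
import pandas as pd

# Hypothetical daily close prices (illustrative).
prices = pd.Series([100, 101, 103, 102, 105, 107, 106, 110], dtype=float)

fast = prices.rolling(2).mean()   # short moving average
slow = prices.rolling(4).mean()   # long moving average

# Long (1) when the fast average is above the slow one, flat (0) otherwise;
# shift by one bar so today's signal trades tomorrow's return (no look-ahead).
position = (fast > slow).astype(int).shift(1).fillna(0)
returns = prices.pct_change().fillna(0)
strategy_returns = position * returns
equity = (1 + strategy_returns).cumprod()   # growth of 1 unit of capital
```

The one-bar shift is the detail strategy-evaluation interviews usually probe: without it the backtest silently uses information that was not available at trade time.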

Posted 1 month ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site

🚀 We’re Hiring: Electronics & Communication Engineers
Focus: RF Data Analytics · Radar Signal Processing · Electronic Warfare | Experience: 1 – 5 yrs | Age Limit: ≤ 30 yrs
Why join Crimson?
Work on next-generation radar and EW programs that safeguard critical national assets.
Turn terabytes of raw I/Q captures into real-time intelligence alongside cross-functional experts.
Ship your code from lab prototype to live field deployment and see immediate impact.
What you’ll do
Acquire – Automate high-throughput downloads, cataloguing and integrity checks of multi-gigabyte RF datasets.
Clean & Sanitize – Write Python/MATLAB routines for noise filtering, interference rejection and metadata standardisation.
Transform – Build DSP modules to demodulate, resample and convert raw I/Q streams into emitter-level feature vectors.
Ingest – Design robust ETL workflows into local and shared SQL/NoSQL databases with geospatial indexing.
Analyse – Produce geospatial heat-maps, time-frequency plots and anomaly alerts that drive mission decisions.
Present – Craft dashboards and concise reports that translate complex RF metrics into clear operational insight.
Maintain – Handle routine calibration of RF front-ends, firmware upgrades and Linux/GPU server upkeep.
Must-have qualifications
Degree: M.Tech / ME / B.Tech / BE / M.Sc. in ECE, Telecom, Signal Processing, Radar Tech, Defence Electronics, or MCA with a strong tech focus.
Experience: 1 – 5 yrs hands-on with electronics, communications or signal-processing systems.
Core knowledge: Electronic Support Measures (ESM), radar theory, communication waveforms, RF chain components.
Tools: MATLAB (or equivalent), Python (NumPy, SciPy, Pandas, PyTorch, scikit-dsp-comm), Git, Docker, Linux.
Data skills: Building ETL pipelines, designing database schemas and basic DevOps practices.
Nice-to-have superpowers
GNU Radio and SDRs (USRP, HackRF) or Keysight/NI test equipment.
REST API development with FastAPI or Flask.
Geospatial tooling (GDAL, PostGIS, QGIS, ArcGIS).
Familiarity with MIL-STD metadata formats (ST 0601/0603, ASTERIX) and radar messaging.
Defence-sector clearance eligibility and a passion for national-security tech.
What we offer
Mission impact: Direct contribution to nationally strategic programmes with tangible outcomes.
Growth runway: Sponsored certifications (DSP, EW, cloud), conference travel and mentoring from senior defence scientists.
Cutting-edge lab: Petabyte-scale RF archive, GPU clusters and dedicated SDR testbeds.
Competitive package: Market-aligned salary, performance bonus, medical & accident insurance, 30 days paid leave.
How to apply
Prepare your CV (PDF) and a one-page cover letter describing an RF or large-scale data-pipeline project you’ve handled.
Deadline: 11 June 2025 (rolling reviews, apply early for priority).
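The "Clean & Sanitize" noise-filtering step the posting describes can be sketched with SciPy; the sample rate, tone frequency and cutoff below are invented for illustration, not taken from any real capture:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1_000.0                       # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)

# Synthetic "capture": a 10 Hz tone buried in wideband noise.
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 10 * t)
noisy = clean + 0.5 * rng.standard_normal(t.size)

# 4th-order Butterworth low-pass at 25 Hz, applied forward-backward
# (filtfilt) so the filter adds zero phase distortion.
b, a = butter(4, 25, btype="low", fs=fs)
filtered = filtfilt(b, a, noisy)

residual_before = np.std(noisy - clean)
residual_after = np.std(filtered - clean)
```

Real I/Q streams are complex-valued, but the same `butter`/`filtfilt` pattern applies per component or with a complex baseband filter.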

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Work Experience: 3+ years
Salary: 21 LPA
Location: Bengaluru
Title: MLOps Engineer
Team Charter:
The team in India comes with a multi-disciplinary skillset, including but not limited to the following areas:
Develop models and algorithms using Deep Learning and Computer Vision on the captured data to provide meaningful analysis to our customers. Some of the projects include object detection, OCR, barcode scanning, stereovision, SLAM, 3D reconstruction, action recognition, etc.
Develop integrated embedded systems for our drones, including embedded system platform development, camera and sensor integration, flight controller and motor control system development, etc.
Architect and develop full-stack software to interface between our solution and customer database and access, including database development, API development, UI/UX, storage, security and processing for data acquired by the drone.
Integration and testing of various off-the-shelf sensors and other modules with the drone and related software.
Design algorithms related to autonomy and flight controls.
Responsibilities:
As a Machine Learning Ops (MLOps) engineer, you will be responsible for building and maintaining the next generation of Vimaan's ML Platform and Infrastructure. MLOps will have a major contribution in making the CV & ML offerings scalable across the company's products. We are building all these data and model pipelines to scale Vimaan operations, and the MLOps Engineer will play a key role in enabling that.
You will lead initiatives geared towards making the Computer Vision Engineers at Vimaan more productive. You will set up the infrastructure that powers the ML teams, thus simplifying the development and deployment cycles of ML models. You will help establish best practices for the ML pipeline and partner with other infrastructure ops teams to help champion them across the company.
Build and maintain data pipelines - data ingestion, filtering, generating pre-populated annotations, etc.
Build and maintain model pipelines - model monitoring, automated triggering of model (re)training, auto-deployment of models to production servers and edge devices.
Own the cloud stack which comprises all ML resources.
Establish standards and practices around MLOps, including governance, compliance, and data security.
Collaborate on managing ML infrastructure costs.
Qualifications:
Deep quantitative/programming background with a degree (Bachelors, Masters or Ph.D.) in a highly analytical discipline, like Statistics, Electrical, Electronics, Computer Science, Mathematics, Operations Research, etc.
A minimum of 3 years of experience in managing machine learning projects end-to-end focused on MLOps.
Experience with building RESTful APIs for monitoring build & production systems using automated monitoring of models and corresponding alarm tools.
Experience with data versioning tools such as Data Version Control (DVC).
Build and maintain data pipelines using tools like Dagster, Airflow, etc.
Experience with containerizing and deploying ML models.
Hands-on experience with AutoML tools, experiment tracking, model management, version tracking & model training (MLflow, W&B, Neptune, etc.), model hyperparameter optimization, model evaluation, and visualization (TensorBoard).
Sound knowledge and experience with at least one DL framework, such as PyTorch, TensorFlow or Keras.
Experience with container technologies (Docker, Kubernetes, etc.).
Experience with cloud services.
Working knowledge of SQL-based databases.
Hands-on experience with the Python scientific computing stack, such as NumPy, SciPy and scikit-learn.
Familiarity with Linux and Git.
Detail-oriented design, code debugging and problem-solving skills.
Effective communication skills: discussing with peers and driving logic-driven conclusions.
Ability to perspicuously communicate complex technical/architectural problems and propose solutions for the same.
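The model-monitoring step that triggers retraining is often reduced in practice to a distribution-drift check on input features. A hedged sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the data, threshold and function name are all illustrative assumptions, not Vimaan's pipeline:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Feature distribution the model was trained on vs. recent production data
# (synthetic here; the live sample has a deliberately shifted mean).
train_feature = rng.normal(loc=0.0, scale=1.0, size=2_000)
live_feature = rng.normal(loc=0.8, scale=1.0, size=2_000)

def needs_retraining(reference, current, alpha=0.01):
    """Flag drift when a two-sample KS test rejects 'same distribution'."""
    stat, p_value = ks_2samp(reference, current)
    return bool(p_value < alpha)

drift_detected = needs_retraining(train_feature, live_feature)
```

In a real pipeline this check would run on a schedule (e.g. via Airflow or Dagster) and a positive result would enqueue a retraining job rather than retrain inline.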
How to stand out
Prior experience in deploying ML & DL solutions as services.
Experience with multiple cloud services.
Ability to collaborate effectively across functions in a fast-paced environment.
Experience with technical documentation and presentation for effective dissemination of work.
Engineering experience in distributed systems and data infrastructure.

Posted 1 month ago

Apply

5.0 - 10.0 years

30 - 37 Lacs

Pune

Remote

We are hiring a Senior Machine Learning Engineer for an MNC.
Job Type: Direct, full-time role
Location: PAN India (Remote)
Senior Machine Learning Engineer
Responsibilities
Develop, implement, and maintain machine learning models using scikit-learn and SciPy.
Build and deploy ML models for production use cases.
Work with regression models and optimization techniques.
Develop and integrate APIs using the Flask framework.
Utilize GCP services (App Engine, Cloud Tasks, Dataflow, BigQuery, Bigtable, Vertex AI).
Optimize CI/CD pipelines using Azure DevOps.
Collaborate with cross-functional teams to deploy scalable ML solutions.
Qualifications
Strong proficiency in Python and core ML libraries: scikit-learn and SciPy.
Hands-on experience with regression models and optimization techniques.
Experience with Flask for API development and deployment.
Proficiency in GCP services and ML operations on cloud infrastructure.
Experience with CI/CD tools, especially Azure DevOps.
Solid understanding of data structures, algorithms, and software engineering practices.
Familiarity with Agile methodologies and technical documentation.
Strong analytical, communication, and problem-solving skills.
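The regression-modeling work with scikit-learn that the role centers on can be sketched end to end on synthetic data (a toy linear relationship, not the employer's models):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data: y = 3x + 2 plus Gaussian noise (purely illustrative).
rng = np.random.default_rng(7)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 0.5, size=200)

# Hold out a test split, fit, and score on unseen data.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
r2 = model.score(X_test, y_test)   # coefficient of determination
```

Serving such a fitted model behind a Flask endpoint, as the responsibilities describe, is then a matter of calling `model.predict` inside a request handler.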

Posted 1 month ago

Apply

4.0 - 9.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group > Hardware Engineering
General Summary:
We are seeking a highly skilled Full Stack Python Developer to join our dynamic team. The ideal candidate should have a strong background in tool development, data science, and automation of complex tasks. You will be responsible for developing high-volume regression dashboards and parametric and power tools, and contributing to both front-end and back-end development.
Minimum Qualifications:
Bachelor's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field and 4+ years of Hardware Engineering or related work experience; OR
Master's degree in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field and 3+ years of Hardware Engineering or related work experience; OR
PhD in Computer Science, Electrical/Electronics Engineering, Engineering, or a related field and 2+ years of Hardware Engineering or related work experience.
Technical Skills:
Python: Proficiency in Python programming, including libraries like Pandas, NumPy, and SciPy for data science.
Full Stack Development: Experience with both front-end (HTML, CSS, JavaScript, React, Vue.js) and back-end (Django, Flask) technologies.
Tool Development: Ability to develop parametric and power tools, possibly using frameworks like Vue.js, PyQt or Tkinter for GUI development.
Data Science: Strong understanding of data analysis, machine learning (using libraries like scikit-learn, TensorFlow), and data visualization (using Matplotlib, Seaborn).
Automation: Experience in automating complex tasks using scripting and tools like Selenium, Airflow, or custom automation scripts.
Soft Skills:
Problem-Solving: Ability to tackle complex problems and develop innovative solutions.
Communication: Strong communication skills to effectively collaborate with team members and stakeholders.
Adaptability: Flexibility to adapt to new technologies and methodologies.
Experience:
Projects: Previous experience in developing tools and automation solutions.
Industry Knowledge: Familiarity with the specific industry or domain you're working in can be a plus.
Key Responsibilities:
Develop and maintain parametric and power tools using Python.
Design and implement automation solutions for complex tasks.
Collaborate with data scientists to analyze and visualize data.
Build and maintain web applications using Django or Flask.
Develop front-end components using HTML, CSS, JavaScript, and React.
Integrate third-party APIs and services.
Optimize applications for maximum speed and scalability.
Write clean, maintainable, and efficient code.
Troubleshoot and debug applications.
Stay updated with the latest industry trends and technologies.
Preferred Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
Previous experience in tool development and automation.
Familiarity with industry-specific tools and technologies.
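The regression-dashboard roll-ups this role builds typically reduce to pandas groupby aggregations over test results; a sketch with invented column names (not Qualcomm's actual schema):

```python
import pandas as pd

# Hypothetical regression-run results (illustrative).
runs = pd.DataFrame({
    "suite":  ["timing", "timing", "power", "power", "power"],
    "status": ["pass", "fail", "pass", "pass", "fail"],
})

# Pass rate per suite: the kind of roll-up a regression dashboard displays.
summary = (runs.assign(passed=runs["status"].eq("pass"))
               .groupby("suite")["passed"]
               .agg(total="size", pass_rate="mean"))
```

A front-end (React/Vue.js) would then render `summary` as a table or chart served by a small Django/Flask endpoint.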

Posted 1 month ago

Apply

6.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Summary
Position Summary
Strategy & Analytics: AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements
Python Developer - Sr. Consultant
The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.
Work you’ll do
Implementation of security and data protection
Implementation of ETL pipelines for data from a wide variety of data sources using Python and SQL
Delivering data and insights in real time
Participate in architectural, design, and product sessions
Unit testing and debugging
Collaborate with other developers, testers, and system engineers to ensure quality of deliverables and any product enhancements
Qualifications
Required:
6-9 years of technology consulting experience
Education: Bachelor's/Master's degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience in Pandas/NumPy dataframes, slicing, data wrangling and aggregations; lambda functions and decorators; vector operations on Pandas dataframes/series; application of the applymap, apply and map functions
Understanding of which framework to use based on specific needs and requirements
Understanding of the threading limitations of Python, and multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3
Primary Skills
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read & write)
CRUD
Awareness of microservices
Preferred:
Good understanding of fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial or SVN)
Knowledge of PowerShell and SQL Server
Familiarity with big data technologies like Spark or Flink, and comfort working with web-scale datasets
An eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
Appreciation of the importance of great documentation, and data debugging skills
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 300058

Posted 1 month ago

Apply

3.0 - 6.0 years

0 Lacs

Greater Kolkata Area

On-site

Summary

Position Summary

Strategy & Analytics - AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

Python Developer - Consultant

The position suits individuals who can work in a constantly challenging environment and deliver effectively and efficiently. As a Data Engineer, you will be an integral member of our Data & Analytics team, responsible for the design and development of pipelines using cutting-edge technologies.

Work you’ll do
Implement security and data protection
Implement ETL pipelines for data from a wide variety of sources using Python and SQL
Deliver data and insights in real time
Participate in architectural, design, and product sessions
Apply unit testing and debugging skills
Collaborate with other developers, testers, and system engineers to ensure the quality of deliverables and of any product enhancements
Qualifications

Required:
3-6 years of technology consulting experience
Education: Bachelor's/Master's degree in Computer Science / MCA / M.Sc / MBA
A minimum of 2 years of experience in unit testing and debugging
Excellent knowledge of the Python programming language, along with knowledge of at least one Python web framework (Django, Flask, FastAPI, Pyramid)
Extensive experience with Pandas/NumPy dataframes: slicing, data wrangling, aggregations; lambda functions and decorators; vector operations on Pandas dataframes/series; application of the applymap, apply, and map functions
Understanding of how to choose a framework based on specific needs and requirements
Understanding of the threading limitations of Python and of multi-process architecture
Basic understanding of front-end technologies, such as JavaScript, HTML5, and CSS3

Primary Skills
Python and data analysis libraries (Pandas, NumPy, SciPy)
Django
DS/Algo
SQL (read & write)
CRUD
Awareness of microservices

Preferred:
Good understanding of the fundamental design principles behind a scalable application
Good understanding of accessibility and security compliance
Familiarity with event-driven programming in Python
Proficient understanding of code versioning tools (Git, Mercurial, or SVN)
Knowledge of PowerShell and SQL Server
Familiarity with big data technologies like Spark or Flink, and comfort working with web-scale datasets
An eye for detail, good data intuition, and a passion for data quality
Good knowledge of user authentication and authorization between multiple systems, servers, and environments
An appreciation of the importance of great documentation and of data debugging skills

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization.
We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their careers.

Requisition code: 300054

Posted 1 month ago

Apply
