
278 PyCharm Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

4 - 8 Lacs

Kolkata

Hybrid

Type: Contract-to-Hire (C2H)

Job Summary: We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, AWS S3, Airflow/Control-M, SQL, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development
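This listing (and several near-identical ones below) centers on the same PySpark ETL loop: ingest from S3, transform, validate, load. As a rough orientation for applicants, here is a minimal sketch of that shape; it is illustrative only, not any employer's codebase, and the bucket, paths, and column names ("order_id", "amount") are hypothetical.

```python
# Illustrative PySpark batch ETL sketch: ingest -> transform -> validate -> load.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw CSV landed in S3 (assumes the S3A connector is configured).
raw = spark.read.option("header", True).csv("s3a://example-bucket/raw/orders/")

# Transform: type-cast, derive a date partition column, drop bad rows.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("created_at"))
       .dropna(subset=["order_id", "amount"])
)

# Validate: a simple row-level check before publishing.
assert clean.filter(F.col("amount") < 0).count() == 0, "negative amounts found"

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/orders/"
)
```

In practice a job like this would be wrapped in an Airflow or Control-M task rather than run standalone.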

Posted 3 months ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Chennai

Hybrid

Work Mode: Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary: We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, AWS S3, Airflow/Control-M, SQL, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development

Posted 3 months ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Bengaluru

Hybrid

Job Description: Python Automation Testing

Duties and Responsibilities:
- Review technical software specifications to create corresponding manual or automated test cases.
- Implement and execute testing in accordance with test plan specifications.
- Submit detailed bug reports that give development teams enough debugging information to quickly resolve all product defects.
- Work closely with development teams to test code changes and find bugs early.
- Assist in developing tools that improve our test automation infrastructure.

Requirements:
- Strong knowledge of Python.
- Hands-on experience in Linux.
- Very strong at understanding test cases and automating the test steps while adhering to the framework and development practices.
- Ability to write scripts and tools for development and debugging.
- Hands-on experience in Selenium and API automation.
- Proficiency in object-oriented programming is a must.
- Self-driven, with effective communication and proactive follow-up; able to work in a fast-paced environment where requirements keep evolving.
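The Pytest-plus-Selenium combination this role asks for typically looks like the following minimal sketch. The URL and element locator are hypothetical placeholders, and it assumes a Chrome driver is available locally.

```python
# Minimal pytest + Selenium UI automation sketch (illustrative only).
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def browser():
    driver = webdriver.Chrome()  # assumes a local Chrome/chromedriver setup
    yield driver
    driver.quit()  # always clean up the browser session


def test_login_page_has_submit_button(browser):
    browser.get("https://example.com/login")  # hypothetical URL
    button = browser.find_element(By.ID, "submit")  # hypothetical locator
    assert button.is_displayed()
```

Run with `pytest test_login.py`; the fixture gives each test a fresh browser and guarantees teardown.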

Posted 3 months ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Bengaluru

Hybrid

Job Description: Python Automation Testing

Duties and Responsibilities:
- Review technical software specifications to create corresponding manual or automated test cases.
- Implement and execute testing in accordance with test plan specifications.
- Submit detailed bug reports that give development teams enough debugging information to quickly resolve all product defects.
- Work closely with development teams to test code changes and find bugs early.
- Assist in developing tools that improve our test automation infrastructure.

Requirements:
- Strong knowledge of Python.
- Hands-on experience in Linux.
- Very strong at understanding test cases and automating the test steps while adhering to the framework and development practices.
- Ability to write scripts and tools for development and debugging.
- Hands-on experience in Selenium and API automation.
- Proficiency in object-oriented programming is a must.
- Self-driven, with effective communication and proactive follow-up; able to work in a fast-paced environment where requirements keep evolving.

Posted 3 months ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Senior Engineer
Location: Bangalore, LTTS India

L&T Technology Services is seeking a Senior Engineer (6+ years of experience) proficient in:
- Strong hands-on Python programming, with knowledge of at least one Python framework.
- Experience using OOP in Python.
- Good experience with unit testing and mocking frameworks like Pytest; TDD is a must.
- Hands-on REST/JSON API development.
- Database experience in MySQL.
- Working experience in Azure IoT and cloud services.
- Experience with Git, Jenkins, or similar build automation tools.
- Experience working with the PyCharm IDE.
- Good exposure to Agile/Scrum methodology and experience working in an Agile development team.

Expertise and Qualifications: Python, IoT, Azure (#AzureFunctions, #Azure, #AzureAppService, #AzureStorage, #AzureIOT, #AzureDataFactory, #AzureServices, #Python, #AzureSDK)

Posted 3 months ago

Apply

7.0 - 10.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate,

We are hiring a Python Developer to build scalable backend systems, data pipelines, and automation tools. This role requires strong expertise in Python frameworks and a deep understanding of software engineering principles.

Key Responsibilities:
- Develop backend services, APIs, and automation scripts using Python.
- Work with frameworks like Django, Flask, or FastAPI.
- Collaborate with DevOps and data teams for end-to-end solution delivery.
- Write clean, testable, and efficient code.
- Troubleshoot and debug applications in production environments.

Required Skills & Qualifications:
- Proficient in Python 3.x, OOP, and design patterns.
- Experience with Django, Flask, FastAPI, and Celery.
- Knowledge of REST APIs and SQL/NoSQL databases (PostgreSQL, MongoDB).
- Familiar with Docker, Git, CI/CD, and cloud platforms (AWS/GCP/Azure).
- Experience in data processing, scripting, or automation is a plus.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
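For the backend-API side of a role like this, a minimal FastAPI service gives a feel for the expected idiom. The route and model below are illustrative assumptions, not a company specification; `model_dump()` assumes Pydantic v2.

```python
# Minimal FastAPI sketch of a backend endpoint (illustrative only).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Job(BaseModel):
    title: str
    location: str


@app.post("/jobs")
def create_job(job: Job) -> dict:
    # A real service would persist this to PostgreSQL/MongoDB instead.
    return {"status": "created", "job": job.model_dump()}

# Run locally with: uvicorn main:app --reload
```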

Posted 3 months ago

Apply

0 years

0 Lacs

Kalaburagi, Karnataka, India

On-site

Responsibilities:
- Ability to write clean, maintainable, and robust code in Python.
- Understanding of, and expertise in, software engineering concepts and best practices.
- Knowledge of testing frameworks and libraries.
- Experience with analytics (descriptive, predictive, EDA), feature engineering, algorithms, anomaly detection, data quality assessment, and Python visualization libraries (e.g., Matplotlib, Seaborn, or others).
- Comfortable with notebook and source code development: Jupyter, PyCharm/VS Code.
- Hands-on experience with technologies like Python, Spark/PySpark, Hadoop/MapReduce/Hive, and Pandas.
- Familiarity with query languages and database technologies, CI/CD, and testing and validation of data and software.

Tech stack and activities you would use and perform daily:
- Python, Spark (PySpark), Jupyter
- SQL and NoSQL DBMS
- Git (source code versioning and CI/CD), GitHub and GitHub Actions
- Exploratory Data Analysis (EDA), imputation techniques, data linking/cleansing, feature engineering
- Apache Airflow/Jenkins scheduling and automation

Soft skills:
- Collaborative: able to build strong relations that enable robust debate and resolve periodic disagreements regarding priorities.
- Excellent interpersonal and communication skills; able to communicate effectively with technical and non-technical audiences.
- Ability to work under pressure with a solid sense for setting priorities; ability to lead technical work with a strong sense of ownership.
- Strong command of English (both verbal and written); practical, action-oriented, and a compelling communicator.
- Excellent stakeholder management; fosters and promotes entrepreneurial spirit and curiosity among team members.
- Team player and quick learner.
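The EDA-plus-imputation workflow this listing names usually starts with a profile of missing values, then a simple fill strategy. A minimal Pandas sketch follows; the input file and its columns are hypothetical.

```python
# Illustrative EDA and median-imputation sketch with Pandas/Matplotlib.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("measurements.csv")  # hypothetical input file

# Descriptive statistics and a missing-value profile per column.
print(df.describe())
print(df.isna().mean().sort_values(ascending=False))

# Simple imputation: fill numeric gaps with each column's median.
numeric_cols = df.select_dtypes("number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())

# Quick distribution check after imputation.
df[numeric_cols].hist(figsize=(10, 6))
plt.tight_layout()
plt.show()
```

Median imputation is only one option; in practice the choice depends on the distribution and downstream model.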

Posted 3 months ago

Apply

6.0 - 8.0 years

5 - 8 Lacs

Mumbai

Hybrid

Work Mode: Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary: We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Skills: PySpark, Apache Spark, Python, AWS S3, Airflow/Control-M, SQL, Unix/Linux, shell scripting, Hive, Hadoop, Cloudera, Hortonworks, CDC, ETL pipelines, data modeling, data validation, performance tuning, unit test cases, real-time and batch integration, API integration, Jupyter Notebook, Zeppelin, PyCharm, Agile methodologies, CI/CD, Jenkins, Git, Informatica, Tableau, Jasper, QlikView, AI/ML model development

Posted 3 months ago

Apply

5.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description:
- Develop and customize Odoo modules as per business requirements.
- Implement and integrate Odoo with third-party applications.
- Maintain and customize existing Odoo modules; create and customize reports.
- Troubleshoot and resolve issues related to Odoo modules and integrations.
- Set up, maintain, and monitor Odoo servers.
- Test new functions/modifications to existing application modules in accordance with application support.

Requirements:
- At least 5-8 years of experience in Odoo development.
- Strong knowledge of the Python programming language, plus a solid understanding of object-oriented design and programming.
- Experience with team handling.
- Strong understanding of Odoo architecture and framework.
- Experience in Python, JavaScript, XML, and SQL.
- Experience using the VS Code and PyCharm IDEs.
- Experience with APIs (REST and SOAP) and integrating them with Odoo applications.

Posted 3 months ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

As a Data Engineer, you are required to:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire pipeline.
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation.
- Design the structure of databases and data storage systems, including schemas, tables, and relationships between datasets, to enable efficient querying.
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure data is structured to be accessible and usable.
- Stay up to date with the latest trends and technologies in data engineering, such as new storage solutions, processing frameworks, and cloud technologies; evaluate and implement new tools to improve data engineering processes.

Qualification: Bachelor's or Master's in Computer Science & Engineering or equivalent; a professional degree in Data Science or Engineering is desirable.

Experience level: At least 3-5 years of hands-on experience in Data Engineering and ETL.

Desired Knowledge & Experience (a lakehouse write is sketched below):
- Spark: Spark 3.x, RDD/DataFrames/SQL, batch/Structured Streaming; knowledge of Spark internals (Catalyst/Tungsten/Photon)
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE and tooling: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: lakehouse/medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python/functional programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: data lake and big data storage design

It also helps to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Required Soft Skills & Other Capabilities:
- Great attention to detail and good analytical abilities.
- Good planning and organizational skills.
- Collaborative approach to sharing ideas and finding solutions.
- Ability to work independently and in a global team environment.
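The medallion-architecture item above, in its simplest form, is a bronze-to-silver hop: deduplicate, type, filter, and republish. Here is a hedged sketch; paths and columns are hypothetical, and it assumes a Spark session with the Delta Lake package available (e.g., on Databricks).

```python
# Sketch of a bronze-to-silver hop in a medallion/lakehouse layout.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw, append-only landing data.
bronze = spark.read.format("delta").load("/lake/bronze/events")

# Silver: deduplicated, typed, and filtered records.
silver = (
    bronze.dropDuplicates(["event_id"])                     # de-dupe on business key
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .withColumn("event_date", F.to_date("event_ts"))  # partition column
          .filter(F.col("event_type").isNotNull())
)

# Partition by date to limit skew and speed downstream queries.
(silver.write.format("delta")
       .mode("overwrite")
       .partitionBy("event_date")
       .save("/lake/silver/events"))
```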

Posted 3 months ago

Apply

4.5 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Roles & Responsibilities
- Hands-on experience with real-time ML models/projects.
- Coding in Python; machine learning, basic SQL, Git, MS Excel.
- Experience using IDEs like Jupyter Notebook, Spyder, and PyCharm.
- Hands-on with AWS services like S3, EC2, SageMaker, and Step Functions.
- Engage with clients/consultants to understand requirements.
- Take ownership of delivering ML models with high-precision outcomes.
- Accountable for high-quality and timely completion of specified work deliverables.
- Write code that is well structured, well documented, and compute-efficient.

Experience: 4.5-6 years

Skills
Primary skill: AI/ML Development
Sub skill(s): AI/ML Development
Additional skill(s): AI/ML Development, TensorFlow, NLP, PyTorch

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
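The core ML deliverable here is a classification model with measured precision. A minimal scikit-learn train/evaluate loop of that kind looks like the sketch below; the dataset is synthetic and the model choice is illustrative.

```python
# Minimal scikit-learn train/evaluate sketch (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real labeled dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Precision/recall per class — the "high precision outcomes" the role asks for.
print(classification_report(y_test, model.predict(X_test)))
```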

Posted 3 months ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: Azure Data Engineer
Location: Pune
Mandatory skills: Azure Databricks, PySpark
Experience: 5 to 9 years
Notice period: 0 to 30 days / immediate joiner / serving notice period

Must-have experience:
- Strong design and data solutioning skills.
- Hands-on PySpark experience with complex transformations and large datasets.
- Good command of and hands-on experience in Python, including: object-oriented and functional programming; NumPy, Pandas, Matplotlib, requests, pytest; Jupyter, PyCharm, and IDLE; Conda and virtual environments.
- Must have working experience with Hive, HBase, or similar.

Azure skills:
- Must have working experience with Azure Data Lake, Azure Data Factory, Azure Databricks, and Azure SQL Databases.
- Azure DevOps.
- Azure AD integration, service principals, pass-through login, etc.
- Networking: VNet, private links, service connections, etc.
- Integrations: Event Grid, Service Bus, etc.

Database skills:
- Experience with at least one of Oracle, Postgres, or SQL Server.
- Oracle PL/SQL or T-SQL experience.
- Data modelling.

Thank you

Posted 3 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
Role: Sr. Tester

Good communication and collaboration skills. Demonstrated experience in developing Python scripts for testing using IDEs such as PyCharm or IDLE.

Responsibilities
- At least 5+ years of experience in front-end (UI) and back-end (API) automation.
- Ability to work as part of a Scrum team.
- Demonstrated ability to use JSON.
- Experience in testing frameworks like Pytest and Selenium.
- Experience with automation testing of UI features using Selenium WebDriver.
- Good knowledge of and experience with version control tools like Git.
- Experience automating tests in CI/CD.
- Experience in RESTful API testing using Postman, Katalon, or JMeter (scriptable in pytest too; see the sketch below).
- Proficient in SQL queries, with hands-on exposure to various databases.
- Experience with project management tools like Jira, Rally, and ServiceNow.
- Knowledge of or exposure to AWS tools such as Lambda, DynamoDB, Step Functions, and S3.
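Checks like the ones testers run manually in Postman can also live in pytest using the `requests` library, which keeps API tests in the same CI/CD suite as UI tests. A minimal sketch, with a hypothetical endpoint:

```python
# Hedged sketch of a RESTful API test in pytest with requests.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service


def test_get_user_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200

    body = resp.json()
    # Set comparison: the response must contain at least these keys.
    assert {"id", "name", "email"} <= body.keys()
```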

Posted 3 months ago

Apply

0 years

0 - 0 Lacs

Thiruvananthapuram

On-site

Data Science and AI Developer

**Job Description:**

We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**

1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**

1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Tooling:
- Data manipulation and analysis: NumPy, Pandas
- Data visualization: Matplotlib, Seaborn, Power BI
- Machine learning libraries: Scikit-learn, TensorFlow, Keras
- Statistical analysis: SciPy
- Web scraping: Scrapy
- IDE: PyCharm, Google Colab

Web development:
- HTML/CSS/JavaScript/React JS: proficiency in these core web development technologies is a must.
- Python Django expertise: in-depth knowledge of e-commerce functionality or deep Python Django knowledge.
- Theming: proven experience in designing and implementing custom themes for Python websites.
- Responsive design: strong understanding of responsive design principles and the ability to create visually appealing, user-friendly interfaces for various devices.
- Problem solving: excellent problem-solving skills, with the ability to troubleshoot and resolve issues independently.
- Collaboration: ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.

Interns must know how to connect a front end to data science components, and vice versa.

**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: reliably commute or plan to relocate before starting work (preferred)
Work Location: In person

Posted 3 months ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Client: Our client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it is a digital engineering and IT services company helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media. Our client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America, and is backed by Baring Private Equity Asia.

Job Title: Python Automation Tester
Location: Hyderabad, Bangalore, Chennai
Experience: 6-8 years
Job Type: Contract to Hire
Notice Period: Immediate joiners
Mandatory skills: Python automation, front-end (UI) and back-end (API) automation, PyCharm or IDLE, Pytest, Selenium, Selenium WebDriver, Git, CI/CD, RESTful API testing

Job description:
- At least 3+ years of experience in front-end (UI) and back-end (API) automation.
- Demonstrated experience in developing Python scripts for testing, using IDEs such as PyCharm or IDLE.
- Experience in testing frameworks like Pytest and Selenium.
- Experience with automation testing of UI features using Selenium WebDriver.
- Good knowledge of and experience with version control tools like Git.
- Experience automating tests in CI/CD.
- Experience in RESTful API testing using Postman, Katalon, or JMeter.
- Ability to work as part of a Scrum team.
- Good communication and collaboration skills.

Posted 3 months ago

Apply

0.0 years

0 Lacs

Thiruvananthapuram, Kerala

On-site

Data Science and AI Developer

**Job Description:**

We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**

1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**

1. Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Tooling:
- Data manipulation and analysis: NumPy, Pandas
- Data visualization: Matplotlib, Seaborn, Power BI
- Machine learning libraries: Scikit-learn, TensorFlow, Keras
- Statistical analysis: SciPy
- Web scraping: Scrapy
- IDE: PyCharm, Google Colab

Web development:
- HTML/CSS/JavaScript/React JS: proficiency in these core web development technologies is a must.
- Python Django expertise: in-depth knowledge of e-commerce functionality or deep Python Django knowledge.
- Theming: proven experience in designing and implementing custom themes for Python websites.
- Responsive design: strong understanding of responsive design principles and the ability to create visually appealing, user-friendly interfaces for various devices.
- Problem solving: excellent problem-solving skills, with the ability to troubleshoot and resolve issues independently.
- Collaboration: ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.

Interns must know how to connect a front end to data science components, and vice versa.

**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Thiruvananthapuram, Kerala: reliably commute or plan to relocate before starting work (preferred)
Work Location: In person

Posted 3 months ago

Apply

0.0 - 5.0 years

5 - 9 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Responsibilities: write effective, scalable code; develop back-end components to improve responsiveness and overall performance; integrate user-facing elements into applications; test and debug programs; improve the functionality of existing systems.

Required candidate profile: expertise in at least one popular Python framework (like Django, Flask, or Pyramid); familiarity with front-end technologies (like JavaScript and HTML5); team spirit; good problem-solving skills.

Perks and benefits: free meals and snacks; bonus; vision insurance.

Posted 3 months ago

Apply

0.0 - 5.0 years

5 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Responsibilities: write effective, scalable code; develop back-end components to improve responsiveness and overall performance; integrate user-facing elements into applications; test and debug programs; improve the functionality of existing systems.

Required candidate profile: expertise in at least one popular Python framework (like Django, Flask, or Pyramid); familiarity with front-end technologies (like JavaScript and HTML5); team spirit; good problem-solving skills.

Perks and benefits: free meals and snacks; bonus; vision insurance.

Posted 3 months ago

Apply

0.0 years

0 Lacs

India

On-site

About The Job
Duration: 12 months
Location: PAN India
Timings: Full time (as per company timings)
Notice period: within 15 days or immediate joiner
Experience: 0-2 years

Requirements

Java:
- Design, develop, and maintain both new and existing code, from client-side development (using Angular, JavaScript, HTML, and CSS) to server-side (using Java and Spring Boot, and T-SQL for data persistence and retrieval).
- Write readable, extensible, testable code while being mindful of performance requirements.
- Create, maintain, and run unit tests for both new and existing code, delivering defect-free and well-tested code to QA.
- Conduct design and code reviews, and collaborate to ensure your own code passes review.
- Leverage our cloud infrastructure (AWS) to engineer solutions that make the best of it.
- Adhere to best-practice development standards; stay abreast of developments in web applications and programming languages.
- Strong Core Java 6+/Java EE hands-on skills; strong knowledge of OOP principles, including design patterns.
- Use of any of the following IDEs: PyCharm for Python, Eclipse or IntelliJ for Java, VS Code for HTML/CSS/JavaScript.
- Good understanding of a relational database engine such as SQL Server, with experience writing SQL queries.
- Strong fundamentals in algorithms and data structures; experience with a modern software development life cycle.
- Eager to learn, work, and deliver independently; speak and write fluently in English.

Python — should be proficient in the following (a minimal Flask/Jinja sketch follows this listing):
- The standard library and OOP in Python
- Dependency management through pip; setuptools
- The Sphinx documentation engine
- Pandas and NumPy
- The Flask framework and the Jinja templating engine
- Celery
- Any production-ready WSGI server, such as Gunicorn or uWSGI

Other Personal Characteristics
- Dynamic, engaging, self-reliant developer with the ability to deal with ambiguity.
- Takes a collaborative and analytical approach; self-confident and humble; open to continuous learning.
- Intelligent, rigorous thinker who can operate successfully amongst bright people.
- Equally comfortable and capable interacting with technologists and business executives.
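As referenced in the Python requirements above, Flask plus Jinja templating in its smallest form looks like this; the route and template string are illustrative assumptions only.

```python
# Minimal Flask + Jinja sketch (illustrative only).
from flask import Flask, render_template_string

app = Flask(__name__)

GREETING = "<h1>Hello, {{ name }}!</h1>"  # inline Jinja template


@app.route("/hello/<name>")
def hello(name: str) -> str:
    # Jinja substitutes {{ name }} with the URL path segment.
    return render_template_string(GREETING, name=name)


if __name__ == "__main__":
    app.run(debug=True)  # dev server; serve via Gunicorn/uWSGI in production
```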

Posted 3 months ago

Apply

5.0 - 7.0 years

9 - 11 Lacs

Hyderabad

Work from Office

Role: PySpark Developer
Locations: Multiple
Work Mode: Hybrid
Interview Mode: Virtual (2 Rounds)
Type: Contract-to-Hire (C2H)

Job Summary: We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.

Key Skills & Responsibilities:
- Strong expertise in PySpark and Apache Spark for batch and real-time data processing.
- Experience designing and implementing ETL pipelines, including data ingestion, transformation, and validation.
- Proficiency in Python for scripting, automation, and building reusable components.
- Hands-on experience with scheduling tools like Airflow or Control-M to orchestrate workflows.
- Familiarity with the AWS ecosystem, especially S3 and related file system operations.
- Strong understanding of Unix/Linux environments and shell scripting.
- Experience with Hadoop, Hive, and platforms like Cloudera or Hortonworks.
- Ability to handle CDC (Change Data Capture) operations on large datasets.
- Experience in performance tuning, optimizing Spark jobs, and troubleshooting.
- Strong knowledge of data modeling, data validation, and writing unit test cases.
- Exposure to real-time and batch integration with downstream/upstream systems.
- Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging.
- Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git).

Preferred Skills:
- Experience building or integrating APIs for data provisioning.
- Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView.
- Familiarity with AI/ML model development using PySpark in cloud environments.

Posted 3 months ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Relevant Experience: 7+ years
Work Location: Hyderabad
Working Days: 5 days
Notice Period: Immediate or 15 days

Mandatory Skills
• 4+ years of hands-on experience writing code in Python, including libraries like Pandas and NumPy.
• Experience writing good-quality Python code and applying code refactoring techniques (e.g., IDEs: PyCharm, Visual Studio Code; libraries: Pylint, pycodestyle, pydocstyle, Black).
• Strong AI-assisted coding experience, including AI-assisted coding in existing IDEs like VS Code, and hands-on research across multiple AI-assisted tools.
• Deep understanding of data structures and algorithms, and excellent problem-solving skills.
• Experience in Python, exploratory data analysis (EDA), feature engineering, and data visualisation.
• Machine learning libraries like Scikit-learn and XGBoost; experience in CV, NLP, or time series.
• Experience building models for ML tasks (regression, classification).
• Experience with LLMs: LLM fine-tuning, chatbots, RAG-pipeline chatbots, LLM solutions, multi-modal LLM solutions, GPT, prompts, prompt engineering, tokens, context windows, attention mechanisms, embeddings.
• Experience with model training and serving on any of the cloud environments (AWS, GCP, Azure), and with distributed training of models on NVIDIA GPUs.
• Familiarity with Dockerizing models and creating model endpoints (REST or gRPC).
• Strong working knowledge of source control tools such as Git and Bitbucket.
• Prior experience designing, developing, and maintaining a machine learning solution through its life cycle is highly advantageous.
• Strong drive to learn and master new technologies and techniques; strong communication and collaboration skills; good attitude and self-motivation.

Posted 3 months ago

Apply

4.0 - 8.0 years

11 - 15 Lacs

Chennai

Work from Office

Overview
We are seeking Senior AI Engineers to join our team, working with a leading FinTech client to develop innovative AI solutions. Our client is building a GenAI Developer Assistant: an end-to-end GenAI-driven SDLC assistant framework. This platform leverages advanced GenAI technologies to accelerate product development and deployment, fostering innovation and delivering services to customers swiftly.

Responsibilities
The Senior AI Engineer focuses on fine-tuning and optimizing large language models to meet the customer's SDLC requirements, ensuring relevant outputs for tasks like code suggestions and documentation; optimizing model performance for specific use cases; and integrating LLMs into SDLC workflows. The role also involves managing vector and graph databases to support efficient data retrieval in the GenAI pipeline, designing and maintaining database structures, and optimizing query performance for the RAG pipeline. Additionally, the engineer works closely with AI engineers to align data needs with model performance and implements strategies for scaling the database systems to support growth.

Core Tools & Technologies:
- Programming languages: Python, TypeScript, Kotlin (for IDE plugin development and AI integrations).
- IDEs: PyCharm, IntelliJ, VS Code (focus on building AI capabilities and integration with these IDEs).
- Database management: vector databases (e.g., Pinecone, Weaviate), graph databases (e.g., Neo4j).
- RAG pipelines: experience implementing and optimizing Retrieval-Augmented Generation pipelines for LLMs (see the toy retrieval sketch below).
- LLM tools: familiarity with GPT models, Llama2, or Code Bison for fine-tuning and optimization.
- Database optimization: query performance tuning, indexing strategies, and managing database scalability.

Good to Have:
- Testing frameworks: pytest, JUnit (for testing integrations).
- Version control: Git (to manage SDLC workflow integration).
- Collaboration tools: Slack, Jira (for communication and project tracking).
- Multi-agent frameworks: experience with frameworks that support multi-agent coordination and interactions.
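The retrieval step of a RAG pipeline, stripped to its essence, is cosine similarity between a query embedding and document embeddings, followed by a top-k selection. The toy NumPy sketch below shows only that core; a real system would use a vector database (e.g., Pinecone, Weaviate) and a learned embedding model, and the random vectors here are stand-ins.

```python
# Toy RAG retrieval step: cosine similarity over embeddings with NumPy.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(100, 384))  # 100 docs, 384-dim (stand-in)
query = rng.normal(size=384)                  # query embedding (stand-in)

# Cosine similarity: L2-normalize, then take dot products.
docs_norm = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
scores = docs_norm @ query_norm

# Indices of the 5 most similar documents; these would be passed to the LLM
# as grounding context in a real pipeline.
top_k = np.argsort(scores)[::-1][:5]
print("retrieved doc ids:", top_k)
```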

Posted 3 months ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
Role: Sr. Tester

Good communication and collaboration skills. Demonstrated experience in developing Python scripts for testing using IDEs such as PyCharm or IDLE.

Responsibilities
- At least 3+ years of experience in front-end (UI) and back-end (API) automation.
- Ability to work as part of a Scrum team.
- Demonstrated ability to use JSON.
- Experience in testing frameworks like Pytest and Selenium.
- Experience with automation testing of UI features using Selenium WebDriver.
- Good knowledge of and experience with version control tools like Git.
- Experience automating tests in CI/CD.
- Experience in RESTful API testing using Postman, Katalon, or JMeter.
- Proficient in SQL queries, with hands-on exposure to various databases.
- Experience with project management tools like Jira, Rally, and ServiceNow.
- Knowledge of or exposure to AWS tools such as Lambda, DynamoDB, Step Functions, and S3.

Posted 3 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About BNP Paribas India Solutions
Established in 2005, BNP Paribas India Solutions is a wholly owned subsidiary of BNP Paribas SA, the European Union's leading bank with an international reach. With delivery centers located in Bengaluru, Chennai, and Mumbai, we are a 24x7 global delivery center. India Solutions services three business lines for BNP Paribas across the Group: Corporate and Institutional Banking, Investment Solutions, and Retail Banking. Driving innovation and growth, we are harnessing the potential of over 10,000 employees to provide support and develop best-in-class solutions.

About BNP Paribas Group
BNP Paribas is the European Union's leading bank and a key player in international banking. It operates in 65 countries and has nearly 185,000 employees, including more than 145,000 in Europe. The Group has key positions in its three main fields of activity: Commercial, Personal Banking & Services for the Group's commercial and personal banking and several specialized businesses, including BNP Paribas Personal Finance and Arval; Investment & Protection Services for savings, investment, and protection solutions; and Corporate & Institutional Banking, focused on corporate and institutional clients. Based on its strong, diversified, and integrated model, the Group helps all its clients (individuals, community associations, entrepreneurs, SMEs, corporate and institutional clients) realize their projects through solutions spanning financing, investment, savings, and protection insurance. In Europe, BNP Paribas has four domestic markets: Belgium, France, Italy, and Luxembourg. The Group is rolling out its integrated commercial and personal banking model across several Mediterranean countries, Turkey, and Eastern Europe. As a key player in international banking, the Group has leading platforms and business lines in Europe, a strong presence in the Americas, and a solid, fast-growing business in Asia-Pacific. BNP Paribas has implemented a Corporate Social Responsibility approach in all its activities, enabling it to contribute to the construction of a sustainable future while ensuring the Group's performance and stability.

Commitment to Diversity and Inclusion
At BNP Paribas, we passionately embrace diversity and are committed to fostering an inclusive workplace where all employees are valued, respected, and can bring their authentic selves to work. We prohibit discrimination and harassment of any kind, and our policies promote equal employment opportunity for all employees and applicants, irrespective of, but not limited to, their gender, gender identity, sex, sexual orientation, ethnicity, race, colour, national origin, age, religion, social status, mental or physical disabilities, veteran status, etc. As a global bank, we truly believe that the inclusion and diversity of our teams is key to our success in serving our clients and the communities we operate in.

About Business Line/Function
ITG is a group function established in ISPL in 2019, with a presence in Mumbai and Chennai. We collaborate with various business lines of the group to provide IT services.

Job Title: QA Engineer
Department: ITG - IT Transversal
Location: Chennai
Business Line / Function: Compliance IT
Reports To (Direct): ISPL CPL IT Manager
Number of Direct Reports: 1

Position Purpose
In the context of a strategic transformation of Compliance Data for BNPP, the QA Engineer will help validate the business requirements and automate them. Aligned with the local team lead, the QA Engineer will be responsible for testing all user stories on the backlog to a good level of quality and for increasing the application's automation coverage.

Responsibilities

Direct responsibilities:
- Requirement analysis of the application under test.
- Validation of the assigned user stories; ensure quality of testing.
- Ensure good progress reporting to the team lead.
- Ability to drive the deliverables for self and the team when needed.
- Automate end-to-end workflows of the application; increase automation and penetration coverage.

Contributing responsibilities:
- Ensure a good level of commitment to avoid global schedule shifts due to dependencies.

Technical & Behavioural Competencies
- Expert in automation using Selenium Cucumber BDD or Robot Framework, including designing automation frameworks and writing automated scripts.
- Experience in DevOps and in SQL queries or MongoDB.
- Good to have: experience in API testing.
- Experience in functional and end-to-end testing, and in Agile and Scrum.

Specific Qualifications
Selenium Cucumber BDD or Robot Framework, DevOps, IntelliJ, GitLab (CI/CD pipelines), Python, PyCharm, JIRA, ALM Octane

Skills
- Behavioural: ability to collaborate/teamwork; attention to detail/rigor; organizational skills; oral and written communication skills.
- Transversal: ability to understand, explain, and support change; ability to manage a project.

Experience Level: At least 5 years

Posted 3 months ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Description

Data Engineer Responsibilities:
- Deliver end-to-end data and analytics capabilities, including data ingest, data transformation, data science, and data visualization, in collaboration with Data and Analytics stakeholder groups.
- Design and deploy databases and data pipelines to support analytics projects; develop scalable and fault-tolerant workflows.
- Clearly document issues, solutions, findings, and recommendations to be shared internally and externally.
- Learn and apply tools and technologies proficiently, including languages (Python, PySpark, ANSI SQL, Python ML libraries), frameworks/platforms (Spark, Snowflake, Airflow, Hadoop, Kafka), cloud computing (AWS), and tools/products (PyCharm, Jupyter, Tableau, Power BI). A toy Airflow example follows this listing.
- Optimize performance of queries and dashboards.
- Develop and deliver clear, compelling briefings to internal and external stakeholders on findings, recommendations, and solutions.
- Analyze client data and systems to determine whether requirements can be met.
- Test and validate data pipelines, transformations, datasets, reports, and dashboards built by the team.
- Develop and communicate solution architectures and present solutions to both business and technical stakeholders.
- Provide end-user support to other data engineers and analysts.

Candidate Requirements:
- Expert experience in SQL, Python, PySpark, and Python ML libraries; other programming languages (R, Scala, SAS, Java, etc.) are a plus.
- Data and analytics technologies, including SQL/NoSQL/graph databases, ETL, and BI.
- Knowledge of CI/CD and related tools such as GitLab and AWS CodeCommit.
- AWS services including EMR, Glue, Athena, Batch, Lambda, CloudWatch, DynamoDB, EC2, CloudFormation, IAM, and EDS.
- Exposure to Snowflake and Airflow; solid scripting skills (e.g., bash/shell scripts, Python).
- Proven work experience with data streaming technologies; big data technologies including Hadoop, Spark, Hive, Teradata, etc.; Linux command-line operations; and networking (OSI network layers, TCP/IP, virtualization).
- Should be able to lead the team, communicate with the business, and gather and interpret business requirements.
- Experience with agile delivery methodologies using Jira or similar tools, and experience working with remote teams.
- AWS Solutions Architect / Developer / Data Analytics Specialty certifications; professional certification is a plus.
- Bachelor's degree in Computer Science or a relevant field; a Master's degree is a plus.
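As referenced in the responsibilities above, scheduled pipelines in Airflow are declared as DAGs of tasks. A hedged sketch, assuming Airflow 2.x; the DAG name, schedule, and task bodies are hypothetical placeholders.

```python
# Hedged Airflow 2.x sketch of a tiny scheduled ETL DAG (illustrative only).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source")  # placeholder task body


def transform():
    print("clean and reshape")  # placeholder task body


with DAG(
    dag_id="example_daily_etl",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",            # `schedule` keyword assumes Airflow 2.4+
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t1 >> t2  # extract runs before transform
```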

Posted 3 months ago

Apply