3.0 - 7.0 years
0 Lacs
ahmedabad, gujarat
On-site
Join the SASA! We are committed to creating a welcoming and supportive environment for all employees. Our goal is to foster a culture of respect, open communication, and collaboration. In our dynamic and challenging work environment, you will have the opportunity to engage in cutting-edge projects for a diverse clientele. As part of our team of experts, you will work with individuals who are dedicated to excellence, innovation, and collaboration. We value employees who take ownership of their work, pursue continuous learning and development, and strive for both professional and personal growth. If you are seeking a challenging and rewarding career in software development and consulting, we invite you to explore our current job openings and apply today. We are excited to hear from you!
Current Openings:
- Senior PHP Developer
- JAVA Springboot Developer
- React Native App Developer
- Data Science & Architect
- Urgent* PHP Developer (Symfony, Laravel, React JS, Angular JS)
- Team Leader (PHP & JS Framework)
- Business Development Executive
- Web Designer
- Mobile Apps Developers (Flutter & React Native)
- AR & VR Developer
Data Science & Architect
Experience: Minimum 3 years
Key skills/experience required:
- Hands-on experience in technologies such as Python, R, Tableau, SQL, Apache Hadoop, AWS, Microsoft Azure, Google Cloud Platform (GCP), Apache Kafka, and Airflow
- Knowledge of SaaS application architecture
- Proficiency with Git, JIRA, and CI/CD
- Fluent in English for effective communication and participation in client meetings
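Since the role lists Python and Apache Airflow among the required skills, here is a minimal sketch of the kind of scheduled data pipeline that work typically involves; the DAG name, task names, schedule, and data are illustrative assumptions, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders():
    # Placeholder extract step: pull raw records from a source system (illustrative only).
    return [{"order_id": 1, "amount": 250.0}]


def load_to_warehouse(**context):
    # Placeholder load step: read the extracted rows from XCom and "load" them.
    rows = context["ti"].xcom_pull(task_ids="extract_orders")
    print(f"Loading {len(rows)} rows")


# Airflow 2.x style DAG definition; dag_id and schedule are hypothetical.
with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```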
Posted 16 hours ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a Senior Engineer at Impetus Technologies, you will play a crucial role in designing, developing, and deploying scalable data processing applications using Java and Big Data technologies. Your responsibilities will include collaborating with cross-functional teams, mentoring junior engineers, and contributing to architectural decisions that enhance system performance and scalability. You will design and maintain high-performance applications, implement data ingestion and processing workflows using frameworks like Hadoop and Spark, and optimize existing applications for improved performance and reliability. You will also mentor junior engineers, participate in code reviews, and stay updated with the latest technology trends in Java and Big Data. To excel in this role, you should have strong proficiency in the Java programming language, hands-on experience with Big Data technologies such as Apache Hadoop and Apache Spark, and an understanding of distributed computing concepts. You should also have experience with data processing frameworks and databases, strong problem-solving skills, and excellent communication and teamwork abilities. You will collaborate with a diverse team of skilled engineers, data scientists, and product managers who are passionate about technology and innovation. The team environment encourages knowledge sharing, continuous learning, and regular technical workshops to enhance your skills and keep you updated with industry trends. Overall, you will be responsible for designing and developing scalable Java applications for Big Data processing, ensuring code quality and performance, and troubleshooting and optimizing existing systems to enhance performance and scalability.
Qualifications:
- Strong proficiency in the Java programming language
- Hands-on experience with Big Data technologies such as Hadoop, Spark, and Kafka
- Understanding of distributed computing concepts
- Experience with data processing frameworks and databases
- Strong problem-solving skills
- Knowledge of version control systems and CI/CD pipelines
- Excellent communication and teamwork abilities
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field preferred
Experience: 7 to 10 years
Job Reference Number: 13131
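The posting is Java-centric, but the ingestion-and-processing workflow it describes can be illustrated with a short PySpark sketch; the input path, columns, and aggregation are hypothetical examples rather than project specifics.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event_ingestion").getOrCreate()

# Ingest raw events from a landing zone (path and schema are hypothetical).
events = spark.read.json("hdfs:///data/landing/events/")

# Typical processing step: drop bad records and aggregate per user per day.
daily_counts = (
    events
    .filter(F.col("user_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Persist the curated output partitioned by date for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/daily_event_counts/"
)
```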
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
bhubaneswar
On-site
As a Pyspark Developer_VIS, your primary responsibility will be to develop high-performance PySpark applications for large-scale data processing. You will collaborate with data engineers and analysts to integrate data pipelines and design ETL processes using PySpark, and you will optimize existing data models and workflows to improve overall performance. You will also analyze large datasets to derive actionable insights and ensure data quality and integrity throughout the data processing lifecycle. Using SQL to query databases and validate data is essential, along with working with cloud technologies to deploy and maintain data solutions. You will participate in code reviews, maintain version control, and clearly document processes, workflows, and system changes. Supporting the resolution of production issues, assisting stakeholders, and mentoring junior developers on best practices in data processing are also part of your responsibilities, as are staying updated on emerging technologies and industry trends, implementing data security measures, contributing to team meetings, and offering insights for project improvements.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3+ years of experience in PySpark development and data engineering
- Strong proficiency in SQL and relational databases
- Experience with ETL tools and data processing frameworks
- Familiarity with Python for data manipulation and analysis
- Knowledge of big data technologies such as Apache Hadoop and Spark
- Experience working with cloud platforms like AWS or Azure
- Understanding of data warehousing concepts and strategies
- Excellent problem-solving and analytical skills, attention to detail, and a commitment to quality
- Ability to work independently and as part of a team
- Excellent communication and interpersonal skills
- Experience with version control systems like Git
- Ability to manage multiple priorities in a fast-paced environment and meet deadlines
- Willingness to learn and adapt to new technologies, plus strong organizational skills
In summary, the ideal candidate for the Pyspark Developer_VIS position should possess a diverse skill set spanning cloud technologies, big data, version control, data warehousing, PySpark, ETL, Python, Azure, Apache Hadoop, data analysis, Apache Spark, SQL, and AWS.
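As a rough illustration of the PySpark ETL and SQL-based validation work described above, the following sketch extracts a dataset, applies basic data-quality rules, validates it with SQL, and loads the curated output; table names, columns, and paths are assumptions made for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer_etl").getOrCreate()

# Extract: source location is a hypothetical example.
raw = spark.read.parquet("s3a://raw-zone/customers/")

# Transform: basic data-quality rules before loading downstream.
clean = (
    raw
    .dropDuplicates(["customer_id"])
    .filter(F.col("email").isNotNull())
    .withColumn("signup_date", F.to_date("signup_ts"))
)

# Validate with SQL, as the role describes, before publishing.
clean.createOrReplaceTempView("customers_clean")
row_count = spark.sql("SELECT COUNT(*) AS n FROM customers_clean").collect()[0]["n"]
assert row_count > 0, "Load aborted: no rows passed data-quality checks"

# Load: write the curated dataset for analysts.
clean.write.mode("overwrite").parquet("s3a://curated-zone/customers/")
```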
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Java Developer, you will be responsible for analyzing, designing, programming, debugging, and modifying software enhancements and/or new products used in various computer programs. Your expertise in Java, Spring MVC, Spring Boot, database design, and query handling will be used to write code, complete programming, and perform testing and debugging of applications. You will work on local, networked, cloud-based, or Internet-related computer programs, ensuring the code meets the necessary standards for commercial or end-user applications such as materials management, financial management, HRIS, mobile apps, or desktop application products.
Your role will involve working with RESTful web services/microservices for JSON creation, data parsing and processing in batch and stream mode, and messaging platforms such as Kafka, Pub/Sub, and ActiveMQ. Proficiency with operating systems, Linux, virtual machines, and open-source tools and platforms is crucial for successful implementation. You are also expected to understand data modeling and storage with NoSQL or relational databases, and to have experience with Jenkins, containerized microservices deployed in cloud environments, and Big Data development (Spark, Hive, Impala, time-series databases).
To excel in this role, you should have a solid understanding of building microservices/web services using Java frameworks, REST API standards and practices, and object-oriented analysis and design patterns. Experience with cloud technologies like Azure, AWS, and GCP will be advantageous. A candidate with Telecom domain experience and familiarity with protocols such as TCP, UDP, SNMP, SSH, FTP, SFTP, CORBA, and SOAP will be preferred.
Being enthusiastic about the work, passionate about coding, a self-starter, and proactive are key qualities for success in this position. Strong communication, analytical, and problem-solving skills are essential, along with the ability to write quality, testable, modular code. Experience with Big Data platforms, participation in Agile development methodologies, and experience working in a start-up environment will be beneficial. Team-leading experience is an added advantage, and immediate joiners will be given special priority. If you possess the necessary skills and experience, have a keen interest in software development, and are ready to contribute to a dynamic team environment, we encourage you to apply for this role.
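The role itself is Java/Spring Boot focused, but the messaging pattern it mentions (producing and consuming JSON over Kafka) can be sketched briefly; this example uses the Python kafka-python client for compactness, and the broker address and topic name are illustrative assumptions.

```python
import json

from kafka import KafkaConsumer, KafkaProducer

# Broker address and topic name are illustrative assumptions.
BROKER = "localhost:9092"
TOPIC = "orders"

# Produce a JSON message, mirroring the JSON creation / messaging work the role describes.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "CREATED"})
producer.flush()

# Consume and process messages in stream mode.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # e.g. {'order_id': 42, 'status': 'CREATED'}
    break                  # stop after one message in this sketch
```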
Posted 6 days ago
2.0 - 15.0 years
0 Lacs
hyderabad, telangana
On-site
You will be responsible for understanding complex and critical business problems and formulating an integrated analytical approach to mine data sources. Your role will involve employing statistical methods and machine learning algorithms to help solve unmet medical needs, discover actionable insights, and automate processes to reduce effort and time for repeated use. You will manage the implementation of, and adherence to, the overall data lifecycle of enterprise data, from data acquisition or creation through enrichment, consumption, retention, and retirement, enabling the availability of useful, clean, and accurate data throughout its lifecycle. Your high agility will allow you to work effectively across various business domains. Additionally, you will combine business presentations, smart visualization tools, and contextual storytelling to translate findings back to business users with clear impact. You will independently manage budgets, ensure appropriate staffing, and coordinate projects within the area. If you are managing a team, you will empower the team and provide guidance and coaching, with initial guidance from more senior leaders; this role may be your first people-manager experience.
In this role, your major accountabilities will include project managing your tasks, collaborating with allied team members, and proactively planning and managing change. You will work with internal stakeholders, external partners, and cross-functional teams to solve critical business problems. Understanding life science data sources and researching and co-developing new algorithms, methods, statistical models, and business models will also be part of your responsibilities. You will quickly learn to use the tools, data sources, and analytical techniques needed to answer a wide range of critical business questions. Supporting the evaluation of technology needs, proposing research articles for application to business problems, developing automation for data management, and articulating solutions to business users will be key aspects of your role. Furthermore, you will work with senior data science team members to present analytical content concisely and effectively, provide understandable and actionable business intelligence for key stakeholders, and, depending on your career path, may lead a small team or provide in-depth technical expertise in a scientific or technical field. Reporting of technical complaints, adverse events, and special case scenarios related to Novartis products within 24 hours of receipt, as well as distribution of marketing samples, may also be required.
Your key performance indicators will include the achievement and governance endorsement of data management and advanced analytics program tollgates, as well as successful implementation based on KPIs and program objectives.
Minimum Requirements:
- In-depth mastery of the external environment and trends
- 15+ years of relevant experience in Data Science
- 2+ years of relevant experience in Data Science
- Track record of delivering global solutions at scale
- Strong professional network across academia
Required Skills:
- Apache Hadoop
- Applied Mathematics
- Big Data
- Curiosity
- Data Governance
- Data Literacy
- Data Management
- Data Quality
- Data Science
- Data Strategy
- Data Visualization
- Deep Learning
- Machine Learning (ML)
- Machine Learning Algorithms
- Master Data Management
- Proteomics
- Python (Programming Language)
- R (Programming Language)
- Statistical Modeling
Language: English
Novartis is dedicated to helping people with diseases and their families by fostering a community of smart, passionate individuals like yourself. By collaborating, supporting, and inspiring each other, we aim to achieve breakthroughs that positively impact patients' lives. If you are ready to contribute to creating a brighter future, you can learn more about our values and culture at [Novartis Careers](https://www.novartis.com/about/strategy/people-and-culture).
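As a loose illustration of the statistical and machine-learning work this role describes, a minimal supervised-learning baseline in Python might look like the following; the dataset and model choice are placeholder assumptions, not Novartis data or methods.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# A public toy dataset stands in for the enterprise/life-science data named in the posting.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Fit a simple, interpretable baseline model.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# Evaluate with a metric suitable for the binary outcome.
scores = model.predict_proba(X_test)[:, 1]
print(f"Test ROC AUC: {roc_auc_score(y_test, scores):.3f}")
```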
Posted 1 week ago
3.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
We are looking for a highly motivated and experienced data scientist to lead our team of Gen-AI engineers. As a data scientist, you will oversee all processes from data extraction, cleaning, and pre-processing to training models and deploying them to production. The ideal candidate will have a strong passion for artificial intelligence and will stay updated with the latest advancements in this field.
Your responsibilities will include utilizing frameworks like LangChain to develop scalable AI solutions, integrating vector databases such as Azure Cognitive Search, Weaviate, or Pinecone to support AI model functionality, and collaborating with cross-functional teams to define problem statements and prototype solutions using generative AI. You will also ensure the robustness, scalability, and reliability of AI systems by implementing best practices in machine learning and software development.
You will be required to explore and visualize data to gain insights, identify differences in data distribution that could impact model performance in real-world deployment, verify and ensure data quality through data cleaning, supervise the data acquisition process if additional data is needed, and find and use available datasets for training. You will define validation strategies, feature engineering, and data augmentation pipelines; train models and tune hyperparameters; analyze model errors and devise strategies to overcome them; and deploy models to production.
The ideal candidate should have a Bachelor's or Master's degree in computer science, data science, mathematics, or a related field, along with 3-10 years of experience in building Gen-AI applications. Preferred skills include proficiency in statistical techniques such as hypothesis testing, regression analysis, clustering, classification, and time series analysis; expertise in deep learning frameworks like TensorFlow, PyTorch, and Keras; specialization in deep learning (NLP) and statistical machine learning; strong Python skills; experience in developing production-grade applications; familiarity with the LangChain framework and vector databases; understanding of and experience with retrieval algorithms; working knowledge of big data platforms and technologies like Apache Hadoop, Spark, Kafka, or Hive; and familiarity with deploying applications on Ubuntu/Linux systems. Excellent communication, negotiation, and interpersonal skills are also essential for this role.
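To make the retrieval idea behind the vector databases mentioned above concrete, here is a minimal, library-agnostic sketch of similarity search over embeddings; the embeddings, documents, and dimensionality are toy assumptions, and a production system would delegate this to a service such as Azure Cognitive Search, Weaviate, or Pinecone.

```python
import numpy as np

# Toy document embeddings; in practice these would come from an embedding model
# and be stored in a vector database such as those named in the posting.
doc_embeddings = np.array([
    [0.9, 0.1, 0.0],   # doc 0
    [0.0, 0.8, 0.2],   # doc 1
    [0.1, 0.1, 0.9],   # doc 2
])
documents = ["refund policy", "shipping times", "warranty terms"]

def retrieve(query_embedding: np.ndarray, top_k: int = 2) -> list[str]:
    """Return the top_k documents by cosine similarity to the query embedding."""
    norms = np.linalg.norm(doc_embeddings, axis=1) * np.linalg.norm(query_embedding)
    scores = doc_embeddings @ query_embedding / norms
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

# A query embedding close to doc 0 should retrieve it first.
print(retrieve(np.array([1.0, 0.0, 0.1])))   # ['refund policy', 'warranty terms']
```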
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
At Improzo, we are dedicated to improving life by empowering our customers through quality-led commercial analytical solutions. Our team of experts in commercial data, technology, and operations collaborates to shape the future and work with leading Life Sciences clients. We prioritize customer success and outcomes, embrace agility and innovation, foster respect and collaboration, and are laser-focused on quality-led execution.
As a Data and Reporting Developer (Improzo Level - Associate) at Improzo, you will play a crucial role in designing, developing, and maintaining large-scale data processing systems using big data technologies. You will collaborate with data architects and stakeholders to implement data storage solutions, develop ETL pipelines, integrate various data sources, design and build reports, optimize performance, and ensure seamless data flow.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and big data applications using distributed processing frameworks.
- Collaborate on data architecture, storage solutions, ETL pipelines, data lakes, and data warehousing.
- Integrate data sources into the big data ecosystem while maintaining data quality.
- Design and build reports using tools like Power BI, Tableau, and MicroStrategy.
- Optimize workflows and queries for high performance and scalability.
- Collaborate with cross-functional teams to deliver data solutions that meet business requirements.
- Perform testing, quality assurance, and documentation of data pipelines.
- Participate in agile development processes and stay up-to-date with big data technologies.
Qualifications:
- Bachelor's or Master's degree in a quantitative field.
- 1.5+ years of experience in data management or reporting projects with big data technologies.
- Hands-on experience or thorough training in AWS, Azure, GCP, Databricks, and Spark.
- Experience in a Pharma Commercial setting or with Pharma data management is advantageous.
- Proficiency in Python, SQL, MDM, Tableau, Power BI, and other tools.
- Excellent communication, presentation, and interpersonal skills.
- Attention to detail, quality, and client centricity.
- Ability to work independently and as part of a cross-functional team.
Benefits:
- Competitive salary and benefits package.
- Opportunity to work on cutting-edge tech projects in the life sciences industry.
- Collaborative and supportive work environment.
- Opportunities for professional development and growth.
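As a rough sketch of the pipeline-to-report handoff this role involves, the following PySpark snippet shapes a small, BI-ready aggregate; the source path, columns, and output location are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("brand_sales_report").getOrCreate()

# Source table and columns are illustrative assumptions, not client data.
sales = spark.read.parquet("s3a://lake/curated/sales/")

# Shape a small, report-ready aggregate for a BI tool such as Power BI or Tableau.
monthly = (
    sales
    .withColumn("month", F.date_format("sale_date", "yyyy-MM"))
    .groupBy("brand", "month")
    .agg(F.sum("units").alias("units"), F.sum("revenue").alias("revenue"))
)

# A single CSV extract keeps the BI import simple for this sketch.
monthly.coalesce(1).write.mode("overwrite").option("header", True).csv(
    "s3a://lake/reports/monthly_brand_sales/"
)
```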
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
As a skilled Senior Engineer at Impetus Technologies, you will utilize your expertise in Java and Big Data technologies to design, develop, and deploy scalable data processing applications. Your responsibilities will include collaborating with cross-functional teams, developing high-quality code, and optimizing data processing workflows. Additionally, you will mentor junior engineers and contribute to architectural decisions to enhance system performance and scalability.
Key Responsibilities:
- Design, develop, and maintain high-performance applications using Java and Big Data technologies.
- Implement data ingestion and processing workflows with frameworks like Hadoop and Spark.
- Collaborate with the data architecture team to define efficient data models.
- Optimize existing applications for performance, scalability, and reliability.
- Mentor junior engineers, provide technical leadership, and promote continuous improvement.
- Participate in code reviews and ensure best practices for coding, testing, and documentation.
- Stay up-to-date with technology trends in Java and Big Data, and evaluate new tools and methodologies.
Skills and Tools Required:
- Strong proficiency in Java programming for building complex applications.
- Hands-on experience with Big Data technologies like Apache Hadoop, Apache Spark, and Apache Kafka.
- Understanding of distributed computing concepts and technologies.
- Experience with data processing frameworks and libraries such as MapReduce and Spark SQL.
- Familiarity with database systems like HDFS, NoSQL databases (e.g., Cassandra, MongoDB), and SQL databases.
- Strong problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of version control systems like Git and familiarity with CI/CD pipelines.
- Excellent communication and teamwork skills for effective collaboration.
About the Role: You will be responsible for designing and developing scalable Java applications for Big Data processing, collaborating with cross-functional teams to implement innovative solutions, and ensuring code quality and performance through best practices and testing methodologies.
About the Team: You will work with a diverse team of skilled engineers, data scientists, and product managers in a collaborative environment that encourages knowledge sharing and continuous learning. Technical workshops and brainstorming sessions will provide opportunities to enhance your skills and stay updated with industry trends.
Responsibilities:
- Developing and maintaining high-performance Java applications for efficient data processing.
- Implementing data integration and processing frameworks using Big Data technologies.
- Troubleshooting and optimizing systems to enhance performance and scalability.
To succeed in this role, you should have:
- Strong proficiency in Java and experience with Big Data technologies and frameworks.
- A solid understanding of data structures, algorithms, and software design principles.
- Excellent problem-solving skills and the ability to work independently and within a team.
- Familiarity with cloud platforms and distributed computing concepts (a plus).
Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 7 to 10 years
Job Reference Number: 13131
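The posting lists MapReduce and Spark SQL among the expected processing frameworks; a minimal Spark SQL example of the declarative aggregation style it refers to is sketched below in PySpark (rather than Java) for brevity, with an in-memory toy dataset standing in for real tables.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark_sql_example").getOrCreate()

# A tiny in-memory dataset stands in for a Hive/HDFS table (names are illustrative).
clicks = spark.createDataFrame(
    [("u1", "home"), ("u1", "search"), ("u2", "home")],
    ["user_id", "page"],
)
clicks.createOrReplaceTempView("clicks")

# The kind of declarative aggregation that Spark SQL work involves.
per_page = spark.sql(
    "SELECT page, COUNT(*) AS views FROM clicks GROUP BY page ORDER BY views DESC"
)
per_page.show()
```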
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
bhubaneswar
On-site
The Software Development Lead plays a crucial role in developing and configuring software systems, whether for the entire product lifecycle or for specific stages. As a Software Development Lead, your main responsibilities include collaborating with different teams to ensure the software meets client requirements, applying your expertise in technologies and methodologies to effectively support projects, and overseeing the implementation of solutions that improve operational efficiency and product quality. You are expected to act as a subject matter expert (SME) and manage the team to deliver high-quality results. Your role involves making team decisions, engaging with multiple teams to contribute to key decisions, providing solutions to problems for your team and others, and facilitating knowledge-sharing sessions to enhance team capabilities. Additionally, you will monitor project progress to ensure alignment with strategic goals.
In terms of professional and technical skills, proficiency in AWS BigData is a must. You should have a strong understanding of data processing frameworks like Apache Hadoop and Apache Spark, experience in cloud services and architecture (especially AWS environments), familiarity with data warehousing solutions and ETL processes, and the ability to implement data security and compliance measures.
Candidates applying for this role should have a minimum of 5 years of experience in AWS BigData. The position is based at our Bhubaneswar office, and 15 years of full-time education is required to be eligible for this role.
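As a small, hedged illustration of the AWS-side automation such a lead might oversee, the sketch below starts an AWS Glue ETL job run with boto3; the region, job name, and argument are assumptions for the example, not details from the posting.

```python
import boto3

# Region and job name are illustrative assumptions, not values from the posting.
glue = boto3.client("glue", region_name="ap-south-1")

# Kick off an ETL job of the kind an AWS BigData lead would own,
# passing a run-specific argument to the job script.
response = glue.start_job_run(
    JobName="curate_sales_data",
    Arguments={"--run_date": "2024-01-01"},
)
print("Started Glue job run:", response["JobRunId"])
```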
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
The opportunity available at EY is for a Big Data Engineer based in Pune, requiring a minimum of 4 years of experience. As a key member of the technical team, you will collaborate with Engineers, Data Scientists, and Data Users in an Agile environment. Your responsibilities will include software design, Scala and Spark development, automated testing, promoting development standards, production support, troubleshooting, and liaising with BAs to ensure correct interpretation and implementation of requirements. You will be involved in implementing tools and processes and handling performance, scale, availability, accuracy, and monitoring. Additionally, you will participate in regular planning and status meetings, provide input in Sprint reviews and retrospectives, and contribute to system architecture and design. Peer code reviews will also be part of your responsibilities.
Key technical skills required for this role include Scala or Java development and design, and experience with technologies such as Apache Hadoop, Apache Spark, Spark Streaming, YARN, Kafka, Hive, Python, and ETL frameworks. Hands-on experience in building data pipelines using Hadoop components and familiarity with version control tools, automated deployment tools, and requirement management is essential. Knowledge of big data modelling techniques and the ability to debug code issues are also necessary.
Desired qualifications include experience with Elasticsearch, scheduling tools like Airflow and Control-M, an understanding of cloud design patterns, exposure to DevOps and Agile project methodology, and HiveQL development. The ideal candidate will possess strong communication skills and the ability to collaborate effectively, mentor developers, and lead technical initiatives. A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is required.
EY is looking for individuals who can work collaboratively across teams, solve complex problems, and deliver practical solutions while adhering to commercial and legal requirements. The organization values agility, curiosity, mindfulness, positive energy, adaptability, and creativity in its employees. EY offers a personalized Career Journey, ample learning opportunities, and resources to help individuals understand their roles and opportunities better.
EY is committed to being an inclusive employer that focuses on achieving a balance between delivering excellent client service and supporting the career growth and wellbeing of its employees. As a global leader in assurance, tax, transaction, and advisory services, EY believes in providing training, opportunities, and creative freedom to its employees to help build a better working world. The organization encourages personal and professional growth, offering motivating and fulfilling experiences to help individuals reach their full potential.
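The posting calls for Scala or Java with Spark Streaming and Kafka; purely as an illustration of that streaming-ingestion pattern, here is a compact PySpark Structured Streaming sketch. The broker, topic, and sink are assumptions, and the spark-sql-kafka connector package would need to be available to Spark.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream_ingest").getOrCreate()

# Broker address and topic are illustrative assumptions.
raw_stream = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "trades")
    .load()
)

# Kafka delivers key/value as binary; decode the payload before processing.
decoded = raw_stream.select(F.col("value").cast("string").alias("payload"))

# Write the stream to the console for this sketch; a real pipeline would target Hive/HDFS/etc.
query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()   # blocks until the streaming query is stopped
```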
Posted 3 weeks ago