2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As an AWS Developer at PwC's Advisory Acceleration Center, you will collaborate with the Offshore Manager and Onsite Business Analyst to understand requirements and take charge of implementing cloud data engineering solutions on AWS, such as an Enterprise Data Lake and Data Hub. With a focus on architecting and delivering scalable cloud-based enterprise data solutions, you will bring expertise in the end-to-end implementation of cloud data engineering solutions using tools like Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and big data modeling techniques using Python/Java. Your responsibilities will include loading disparate data sets, translating complex requirements into detailed designs, and deploying Snowflake features like data sharing, events, and lake-house patterns. You are expected to possess a deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling, and demonstrate strong hands-on expertise in AWS services such as EMR, Glue, SageMaker, S3, Redshift, DynamoDB, and AWS streaming services like Kinesis, SQS, and MSK. Troubleshooting and performance tuning experience in the Spark framework, familiarity with flow tools like Airflow, NiFi, or Luigi, and proficiency in application DevOps tools like Git, CI/CD frameworks, Jenkins, and GitLab are essential for this role. Desired skills include experience in building stream-processing systems using solutions like Storm or Spark Streaming, knowledge of big data ML toolkits such as Mahout, SparkML, or H2O, proficiency in Python, and exposure to offshore/onsite engagements and AWS services like STEP & Lambda. Candidates with 2-4 years of hands-on experience in cloud data engineering solutions, a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, and a passion for problem-solving and effective communication are encouraged to apply and be part of PwC's dynamic and inclusive work culture, where learning, growth, and excellence are at the core of our values. Join us at PwC, where you can make a difference today and shape the future tomorrow!
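For illustration only: the Snowflake loading work this listing mentions often boils down to a COPY statement run (or automated by a Snowpipe) against staged files. Below is a minimal, hypothetical Python sketch using the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders and not details from the posting.

```python
# Hypothetical sketch: batch-loading staged JSON files into a Snowflake table.
# All identifiers (account, user, warehouse, database, stage, table) are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder account locator
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # A Snowpipe definition could wrap this same COPY so that new files
    # landing in the stage are ingested automatically on arrival.
    cur.execute(
        """
        COPY INTO raw_events
        FROM @raw_stage/events/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'CONTINUE'
        """
    )
    print(cur.fetchall())   # per-file load results
finally:
    conn.close()
```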
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As an AWS Developer at PwC's Acceleration Center in Bangalore, you will be responsible for the end-to-end implementation of cloud data engineering solutions like an Enterprise Data Lake and Data Hub on AWS. You will collaborate with the Offshore Manager/Onsite Business Analyst to understand requirements and architect scalable, distributed, cloud-based enterprise data solutions. Your role will involve hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and big data modeling techniques using Python/Java. You must have a deep understanding of relational and NoSQL data stores, methods, and approaches such as star and snowflake dimensional modeling. Strong expertise in AWS services like EMR, Glue, SageMaker, S3, Redshift, DynamoDB, and streaming services like Kinesis, SQS, and MSK is essential. Troubleshooting and performance tuning experience in the Spark framework, along with knowledge of flow tools like Airflow, NiFi, or Luigi, is required. Experience with application DevOps tools like Git, CI/CD frameworks, Jenkins, or GitLab is preferred. Familiarity with AWS CloudWatch, CloudTrail, Account Config, Config Rules, and cloud data migration processes is expected. Good analytical, problem-solving, communication, and presentation skills are essential for this role. Desired skills include building stream-processing systems using Storm or Spark Streaming, experience with big data ML toolkits like Mahout, SparkML, or H2O, and knowledge of Python. Exposure to offshore/onsite engagements and AWS services like STEP and Lambda would be a plus. Candidates with 2-4 years of hands-on experience in cloud data engineering solutions and a background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA are encouraged to apply. Travel to client locations may be required based on project needs. This position falls under the Advisory line of service and the Technology Consulting horizontal, with the designation of Associate, based in Bangalore, India. If you are passionate about working in a high-performance culture that values diversity, inclusion, and professional development, PwC could be the ideal place for you to grow and excel in your career. Apply now to be part of a global team dedicated to solving important problems and making a positive impact on the world.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Business Intelligence Analyst on our team, you will collaborate with product managers, engineers, and business stakeholders to establish key performance indicators (KPIs) and success metrics for Creator Success. Your role involves creating detailed dashboards and self-service analytics tools using platforms like QuickSight, Tableau, or similar business intelligence (BI) tools. You will conduct in-depth analysis of customer behavior, content performance, and livestream engagement patterns. Developing and maintaining robust ETL/ELT pipelines to handle large volumes of streaming and batch data from the Creator Success platform is a key responsibility. Additionally, you will be involved in designing and optimizing data warehouses, data lakes, and real-time analytics systems using AWS services such as Redshift, S3, Kinesis, EMR, and Glue. Ensuring data accuracy and reliability is crucial, and you will implement data quality frameworks and monitoring systems. Your qualifications should include a Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field. With at least 3 years of experience in business intelligence or analytics roles, you should have proficiency in SQL, Python, and/or Scala. Expertise in AWS cloud services like Redshift, S3, EMR, Glue, Lambda, and Kinesis is required. You should have a strong background in building and optimizing ETL pipelines, data warehousing solutions, and big data technologies like Spark and Hadoop. Familiarity with distributed computing frameworks, business intelligence tools (QuickSight, Tableau, Looker), and data visualization best practices is essential. Proficiency in SQL and Python is highly valued, along with skills in AWS Lambda, QuickSight, Power BI, AWS S3, AWS Kinesis, ETL, Scala, AWS EMR, Hadoop, Spark, AWS Glue, and data warehousing. If you are passionate about leveraging data to drive business decisions and have a strong analytical mindset, we welcome you to join our team and make a significant impact in the field of business intelligence.
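For illustration only: one batch ELT step of the kind this role describes might roll raw livestream events up into daily creator KPIs. The following hypothetical PySpark sketch does exactly that; the S3 paths and column names are invented for the example, not taken from the posting.

```python
# Hypothetical sketch: daily creator-engagement KPIs from raw event data.
# Bucket names and columns are invented; they are not from the job posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("creator-success-kpis").getOrCreate()

events = spark.read.json("s3://example-bucket/creator-events/")  # placeholder path

daily_kpis = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("creator_id", "event_date")
    .agg(
        F.countDistinct("viewer_id").alias("unique_viewers"),
        F.sum("watch_seconds").alias("total_watch_seconds"),
        F.count("*").alias("engagement_events"),
    )
)

# Written as partitioned Parquet so a BI tool such as QuickSight or Tableau
# can query it via Athena or Redshift Spectrum.
daily_kpis.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/kpi/creator_daily/"
)
```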
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Senior AWS Data Engineer - Cloud Data Platform at Teamware Solutions, a division of Quantum Leap Consulting Pvt. Ltd, located in Bangalore, you will be responsible for the end-to-end implementation of cloud data engineering solutions like an Enterprise Data Lake and Data Hub on AWS. Working onsite in an office environment five days a week, you will collaborate with the Offshore Manager and Onsite Business Analyst to understand the requirements and deliver scalable, distributed, cloud-based enterprise data solutions. You should have a strong background in AWS cloud technology, with 4-8 years of hands-on experience. Proficiency in architecting and delivering highly scalable solutions is a must, along with expertise in cloud data engineering solutions, Lambda or Kappa architectures, data management concepts, and data modeling. You should be proficient in AWS services such as EMR, Glue, S3, Redshift, and DynamoDB, and have experience with big data frameworks like Hadoop and Spark. Additionally, you must have hands-on experience with AWS compute and storage services, AWS streaming services, troubleshooting and performance tuning in the Spark framework, and knowledge of application DevOps tools like Git and CI/CD frameworks. Familiarity with AWS CloudWatch, CloudTrail, Account Config, Config Rules, security, key management, and data migration processes, along with strong analytical skills, is required. Good communication and presentation skills are essential for this role. Desired skills include experience in building stream-processing systems, big data ML toolkits, Python, offshore/onsite engagements, flow tools like Airflow, NiFi, or Luigi, and AWS services like STEP & Lambda. A professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA is preferred, and an AWS Certified Data Engineer certification is recommended. If you are interested in this position and meet the qualifications mentioned above, please send your resume to netra.s@twsol.com.
Posted 1 month ago
6.0 - 9.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Roche you can show up as yourself, embraced for the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued, accepted and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop and cure diseases and ensure everyone has access to healthcare today and for generations to come. Join Roche, where every voice matters.

The Position / The Opportunity: The Senior Software Engineer is a member of a talented team in Pune and will apply their expert knowledge of Python, Node.js (TypeScript) and AWS in the implementation of complex, enterprise-scale software systems. General responsibilities include requirement analysis, low-level design, implementation, unit testing for components or features, and integration with external partner APIs. The engineer works as an individual contributor or in a small team on specific product features, with occasional guidance and in coordination with other teammates, participates in peer code review sessions, and enforces the quality of deliverables.

Job Facts: Software Development: this is a hands-on software development position to write high-quality software that will perform at scale, be supportable, and be extensible. Process & Operations: ensure the software deliverables follow existing process guidelines and conform to all existing quality parameters; follow scaled agile framework guidelines for incremental development. Mentorship: mentor and guide junior team members in technical challenges and provide guidance on best practices and quality attributes. Technology stack: the primary backend stack is Python based, but we constantly explore different technologies and toolsets that are fit for purpose. Here is a list of technologies we currently use: Python, Node.js and Java (good to have); Protobuf, JSON, XML, YAML; Git, TortoiseGit; data stream processing frameworks: Apache Flink, AWS Kinesis, AWS Firehose; AWS services (must be aware of the basics of RDS, MSK, EC2, Lambda, ElastiCache, CloudFront, API Gateway, S3, NLB/ALB, Security Groups/NACLs/VPCs, CloudWatch, SNS, SQS); Docker & Kubernetes (good to have); experience with TypeScript and writing APIs in Node.js.

Your main responsibilities will include: designing, developing, and implementing robust and scalable web applications using Python and Node.js (TypeScript); writing clean, well-designed, testable, efficient and maintainable code; developing a new set of APIs and writing unit test cases for them; writing reusable code and libraries; collaborating with team members and stakeholders; reviewing code written by fellow junior developers; and taking part in agile ceremonies like stand-ups, sprint planning, and demos with co-workers.

Who You Are: BS/B.Tech/MS degree in Computer Science or a directly related discipline; 6-9 years of hands-on industry experience as a Python and Node.js (TypeScript) developer; experience developing APIs with Python and Node.js (TypeScript); experience developing and implementing ML models using the TensorFlow framework (experience with XGBoost is a plus); experience with event-based architecture and cloud development using AWS; a solid understanding of design patterns, object-oriented design and event-based architecture. Experience in healthcare is not required, but familiarity with healthcare data and workflows is a plus (e.g. HL7, IHE). Knowledge and experience with the Agile development process or SAFe is a big plus, along with great written and verbal communication in English.

Mindset: You will be expected to demonstrate the We@RIS dimensions, help evolve the function's culture beliefs, and bring We@RIS to life. The dimensions are: we are passionate about our customers and patients; we radically simplify; we trust, collaborate & have fun; we ALL lead; we experiment & learn.

Are you ready to apply? We want someone who thinks beyond the job offered - someone who knows that this position can be a unique opportunity to shape the future of Diagnostics.

Who we are: A healthier future drives us to innovate. Together, more than 100,000 employees across the globe are dedicated to advancing science and ensuring everyone has access to healthcare today and for generations to come. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests conducted using our Diagnostics products. We empower each other to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future, together. Roche is an Equal Opportunity Employer.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Haryana
On-site
You should have 8-10 years of operational knowledge in microservices and .NET full stack, with experience in C# or Python development, as well as Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is necessary, and familiarity with AWS Kinesis and AWS Redshift is preferred. A strong desire to learn new technologies and skills is highly valued. Experience with unit testing and Test-Driven Development (TDD) methodology is considered an asset. You should possess strong team spirit, analytical skills, and the ability to synthesize information. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is important. Fluency in English is required due to the multicultural and international nature of the team. In this role, you will have the opportunity to develop your technical skills in C# .NET and/or Python, Oracle, PostgreSQL, AWS, ELK (Elasticsearch, Logstash, Kibana), Git, GitHub, TeamCity, Docker, and Ansible.
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field. 3-4 years of hands-on experience in data engineering, with a strong focus on AWS cloud services. Proficiency in Python for data manipulation, scripting, and automation. Strong command of SQL for data querying, transformation, and database management. Demonstrable experience with AWS data services, including: Amazon S3 (data lake storage and management); AWS Glue (ETL service for data preparation); Amazon Redshift (cloud data warehousing); AWS Lambda (serverless computing for data processing); Amazon EMR (managed Hadoop framework for big data processing; Spark/PySpark experience highly preferred); and AWS Kinesis or Kafka (real-time data streaming). Strong analytical, problem-solving, and debugging skills. Excellent communication and collaboration abilities, with the capacity to work effectively in an agile team environment. Responsibilities: Troubleshoot and resolve data-related issues and performance bottlenecks in existing pipelines. Develop and maintain data quality checks, monitoring, and alerting mechanisms to ensure data pipeline reliability. Participate in code reviews, contribute to architectural discussions, and promote best practices in data engineering.
Posted 1 month ago
10.0 - 18.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
You should possess a B.Tech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required. Familiarity with architecture patterns like data lake, data lakehouse, and data mesh is also important. You should have a good understanding of data warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge of designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and SageMaker is advantageous. You should also have experience with modern development workflows like Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. Possessing an AWS Professional/Specialty certification or relevant cloud expertise is a plus. In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and demonstrate good presentation skills when interacting with executives, IT management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate for this position has 10 to 18 years of experience; please reference the job with the number 12895.
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
Haryana
On-site
You should have 8-10 years of operational knowledge in microservices and .NET full stack, C# or Python development, along with experience in Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is a must, and familiarity with AWS Kinesis and AWS Redshift is desirable. A genuine interest in mastering new technologies is essential for this role. Experience with unit testing and Test-Driven Development (TDD) methodology will be considered an asset. Strong team spirit, analytical skills, and the ability to synthesize information are key qualities we are looking for. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is highly valued. Being fluent in English is important as you will be working in a multicultural and international team. In this role, you will have the opportunity to develop your technical skills in the following areas: C# .NET and/or Python programming, Oracle and PostgreSQL databases, AWS services, the ELK (Elasticsearch, Logstash, Kibana) stack, version control tools like Git and GitHub, continuous integration with TeamCity, containerization with Docker, and automation using Ansible.
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box - we're looking for candidates who are particularly strong in a few areas and have some interest and capabilities in others. You will design, develop, and maintain microservices that power Kong Konnect, the Service Connectivity Platform. Working closely with Product Management and teams across Engineering, you will develop software that has a direct impact on our customers' business and Kong's success. This opportunity is hybrid (Bangalore based) with 3 days in the office and 2 days work from home. You will implement and maintain services that power high-bandwidth logging and tracing services for our cloud platform, such as indexing and searching logs and traces of API requests powered by Kong Gateway and Kuma Service Mesh; implement efficient solutions at scale using distributed and multi-tenant cloud storage and streaming systems; implement cloud systems that are resilient to regional and zonal outages; participate in an on-call rotation to support services in production, ensuring high performance and reliability; write and maintain automated tests to ensure code integrity and prevent regressions; mentor other team members; and undertake additional tasks as assigned by the manager. Requirements include 5+ years working in a team to develop, deliver, and maintain complex software solutions; experience in log ingestion, indexing, and search at scale; excellent verbal and written communication skills; proficiency with OpenSearch/Elasticsearch and other full-text search engines; experience with streaming platforms such as Kafka, AWS Kinesis, etc.; operational experience in running large-scale, high-performance internet services, including on-call responsibilities; experience with the JVM and languages such as Java and Scala; experience with AWS and cloud platforms for SaaS teams; experience designing, prototyping, building, monitoring, and debugging microservices architectures and distributed systems; an understanding of cloud-native systems like Kubernetes, GitOps, and Terraform; and a Bachelor's or Master's degree in Computer Science. Bonus points if you have experience with columnar stores like Druid/ClickHouse/Pinot, working on new products/startups, contributing to Open Source Software projects, or working on or developing L4/L7 proxies such as Nginx, HAProxy, Envoy, etc. Kong is THE cloud native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). Loved by developers and trusted with enterprises' most critical traffic volumes, Kong helps startups and Fortune 500 companies build with confidence, allowing them to bring solutions to market faster with API and service connectivity that scales easily and securely. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud. Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.
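For illustration only: the log-indexing work described above typically ends with documents being written into OpenSearch. Here is a minimal, hypothetical sketch using the opensearch-py client to index a single API-request log document; the host, credentials, index name, and fields are placeholders, not details of Kong's systems.

```python
# Hypothetical sketch: indexing one API-request log document into OpenSearch.
# Host, credentials, index name, and fields are placeholders.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "logs.example.internal", "port": 9200}],
    http_auth=("ingest_user", "***"),
    use_ssl=True,
)

doc = {
    "timestamp": "2024-01-01T12:00:00Z",
    "service": "api-gateway",
    "route": "/v1/orders",
    "status": 200,
    "latency_ms": 42,
}

# In a real pipeline this would be a bulk request fed from a Kafka/Kinesis consumer.
response = client.index(index="api-logs-2024.01.01", body=doc)
print(response["result"])
```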
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
You should have hands-on experience in deploying and managing large-scale dataflow products such as Cribl, Logstash, or Apache NiFi. Additionally, you should be proficient in integrating data pipelines with cloud platforms like AWS, Azure, Google Cloud, and on-premises systems. It is essential to have experience in developing and validating field extraction using regular expressions. A strong understanding of operating systems and networking concepts is required, including Linux/Unix system administration, HTTP, and encryption. You should possess knowledge of software version control, deployment, and build tools following DevOps SDLC practices such as Git, Jenkins, and Jira. Strong analytical and troubleshooting skills are crucial for this role, along with excellent verbal and written communication skills. An appreciation of Agile methodologies, specifically Kanban, is also expected. Desirable skills for this position include enterprise experience with a distributed event streaming platform like Apache Kafka, AWS Kinesis, Google Pub/Sub, or MQ. Experience in infrastructure automation and integration, preferably using Python and Ansible, would be beneficial. Familiarity with cybersecurity concepts, event types, and monitoring requirements is a plus. Experience in parsing and normalizing data in Elasticsearch using the Elastic Common Schema (ECS) would also be advantageous.
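For illustration only: field extraction with regular expressions, as required above, usually means turning a raw log line into named fields. The hypothetical Python sketch below shows the idea; the log format and field names are invented for the example.

```python
# Hypothetical sketch: extracting fields from a web-access log line with a
# named-group regular expression. The log format shown is invented.
import re

LOG_PATTERN = re.compile(
    r"(?P<client_ip>\d+\.\d+\.\d+\.\d+) - - "
    r"\[(?P<timestamp>[^\]]+)\] "
    r'"(?P<method>[A-Z]+) (?P<path>\S+) \S+" '
    r"(?P<status>\d{3}) (?P<bytes>\d+)"
)

line = '203.0.113.7 - - [01/Jan/2024:12:00:00 +0000] "GET /health HTTP/1.1" 200 512'

match = LOG_PATTERN.match(line)
if match:
    fields = match.groupdict()
    print(fields["client_ip"], fields["status"])   # 203.0.113.7 200
```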
Posted 1 month ago
7.0 - 12.0 years
10 - 20 Lacs
Bengaluru
Work from Office
8+ years of experience in database technologies: AWS Aurora PostgreSQL, NoSQL, DynamoDB, MongoDB, and Erwin data modeling. Experience with pg_stat_statements and query execution plans. Experience with Apache Kafka, AWS Kinesis, Airflow, and Talend. Experience with AWS CloudWatch, Prometheus, and Grafana. Required candidate profile: experience with GDPR, SOC2, Role-Based Access Control (RBAC), and encryption standards; experience with AWS Multi-AZ, read replicas, failover strategies, and backup automation; experience with Erwin, Lucidchart, Confluence, and JIRA.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Database Designer / Senior Data Engineer at VE3, you will be responsible for architecting and designing modern, scalable data platforms on AWS and/or Azure, ensuring best practices for security, cost optimization, and performance. You will develop detailed data models and document data dictionaries and lineage to support data solutions. Additionally, you will build and optimize ETL/ELT pipelines using languages such as Python, SQL, and Scala, and services like AWS Glue, Azure Data Factory, and open-source frameworks like Spark and Airflow. Collaboration is key in this role, as you will work closely with data analysts, BI teams, and stakeholders to translate business requirements into data solutions and dashboards. You will also partner with DevOps/Cloud Ops to automate CI/CD for data code and infrastructure, ensuring governance, security, and compliance standards such as GDPR and ISO 27001 are met. Monitoring, alerting, and data quality frameworks will be implemented to maintain data integrity. As a mentor, you will guide junior engineers and stay updated on emerging big data and streaming technologies to enhance our toolset. The ideal candidate should have a Bachelor's degree in Computer Science, Engineering, IT, or a similar field, with at least 3 years of hands-on experience in a Database Designer / Data Engineer role within a cloud environment. Technical skills required include expertise in SQL, proficiency in Python or Scala, and familiarity with cloud services like AWS (Glue, S3, Kinesis, RDS) or Azure (Data Factory, Data Lake Storage, SQL Database). Strong communication skills are essential, along with an analytical mindset to address performance bottlenecks and scaling challenges. A collaborative attitude in agile/scrum settings is highly valued. Nice-to-have qualifications include certifications in AWS or Azure data analytics, exposure to data science workflows, experience with containerized workloads, and familiarity with DataOps practices and tools. At VE3, we are committed to fostering a diverse and inclusive environment where every voice is heard, and every idea can contribute to tomorrow's breakthrough.
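For illustration only: pipelines like the ones described above are commonly orchestrated with Airflow. Below is a minimal, hypothetical Airflow 2.x DAG sketch with a single Python task; the DAG id, schedule, and task logic are placeholders rather than anything from the posting.

```python
# Hypothetical sketch: a one-task Airflow 2.x DAG for a nightly ELT step.
# DAG id, schedule, and the extract/load logic are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_nightly_load() -> None:
    # In a real pipeline this might trigger AWS Glue, Spark, or plain SQL.
    print("extract -> transform -> load")


with DAG(
    dag_id="nightly_elt_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_task = PythonOperator(
        task_id="run_nightly_load",
        python_callable=run_nightly_load,
    )
```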
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
At LeadSquared, we are committed to staying current with the latest technology trends and leveraging cutting-edge tech stacks to enhance our product. As a member of our engineering team, you will have the opportunity to work closely with the newest web and mobile technologies, tackling challenges related to scalability, performance, security, and cost optimization. Our primary objective is to create the industry's premier SaaS platform for sales execution, making LeadSquared an ideal place to embark on an exciting career. The role we are offering is tailored for developers with a proven track record in developing high-performance microservices using Golang, Redis, and various AWS services. Your responsibilities will include deciphering business requirements and crafting solutions that are not only secure and scalable but also high-performing and easily testable. Key requirements: a minimum of 5 years of experience in constructing high-performance APIs and services, with a preference for Golang; proficiency in working with data streams such as Kafka or AWS Kinesis; hands-on experience with large-scale enterprise applications while adhering to best practices; strong troubleshooting and debugging skills, coupled with the ability to design and create reusable, maintainable, and easily debuggable applications; and proficiency in Git. Preferred skills: familiarity with Kubernetes and microservices; experience with OLAP databases/data warehouses like ClickHouse or Redshift; and experience in developing and deploying applications on the AWS platform. If you are passionate about cutting-edge technologies, eager to tackle challenging projects, and keen on building innovative solutions, then this role at LeadSquared is the perfect opportunity for you to excel and grow in your career.
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
You must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with time-series data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage. Knowledge of Palantir is required. You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in using statistical computer languages and libraries such as Python/PySpark, pandas, NumPy, and seaborn/matplotlib is necessary. Knowledge of Streamlit is a plus. Familiarity with Scala, Golang, Java, and big data tools such as Hadoop, Spark, and Kafka is beneficial. Experience with relational databases like Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and NoSQL databases including Hadoop, Cassandra, and MongoDB is expected. Proficiency in data pipeline and workflow management tools like Azkaban, Luigi, and Airflow is required. Experience in building and optimizing big data pipelines, architectures, and data sets is crucial. You should possess strong analytical skills related to working with unstructured datasets, provide innovative solutions to data engineering problems, document technology choices and integration patterns, and apply best practices for project delivery with clean code. Demonstrate innovation and proactiveness in meeting project requirements. Reporting to: Director - Intelligent Insights and Data Strategy. Travel: Must be willing to be deployed at client locations worldwide for long and short terms, and flexible for shorter durations within India and abroad.
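For illustration only: since the role emphasises hands-on work with time-series data in Python, here is a small, hypothetical pandas sketch that resamples minute-level sensor readings to hourly statistics; the column names and data are invented for the example.

```python
# Hypothetical sketch: resampling raw time-series sensor readings to hourly stats.
# Column names and data are invented for illustration.
import numpy as np
import pandas as pd

rng = pd.date_range("2024-01-01", periods=1440, freq="min")   # one day of minute data
readings = pd.DataFrame(
    {"timestamp": rng, "temperature_c": 20 + np.random.randn(len(rng))}
)

hourly = (
    readings
    .set_index("timestamp")
    .resample("1H")["temperature_c"]
    .agg(["mean", "min", "max"])
)
print(hourly.head())
```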
Posted 1 month ago
5.0 - 10.0 years
4 - 9 Lacs
Bengaluru
Work from Office
Summary: We are seeking a highly skilled and experienced Snowflake Database Administrator (DBA) to join our team. The ideal candidate will be responsible for the administration, management, and optimization of our Snowflake data platform. The role requires strong expertise in database design, performance tuning, security, and data governance within the Snowflake environment. Key Responsibilities: Administer and manage Snowflake cloud data warehouse environments, including provisioning, configuration, monitoring, and maintenance. Implement security policies, compliance, and access controls. Manage Snowflake accounts and databases in a multi-tenant environment. Monitor the systems and provide proactive solutions to ensure high availability and reliability. Monitor and manage Snowflake costs. Collaborate with developers, support engineers and business stakeholders to ensure efficient data integration. Automate database management tasks and procedures to improve operational efficiency. Stay up to date with the latest Snowflake features, best practices, and industry trends to enhance the overall data architecture. Develop and maintain documentation, including database configurations, processes, and standard operating procedures. Support disaster recovery and business continuity planning for Snowflake environments. Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in Snowflake operations and administration. Strong knowledge of SQL, query optimization, and performance tuning techniques. Experience in managing security, access controls, and data governance in Snowflake. Familiarity with AWS. Proficiency in Python or Bash. Experience in automating database tasks using Terraform, CloudFormation, or similar tools. Understanding of data modeling concepts and experience working with structured and semi-structured data (JSON, Avro, Parquet). Strong analytical, problem-solving, and troubleshooting skills. Excellent communication and collaboration abilities. Preferred Qualifications: Snowflake certification (e.g., SnowPro Core, SnowPro Advanced: Architect, Administrator). Experience with CI/CD pipelines and DevOps practices for database management. Knowledge of machine learning and analytics workflows within Snowflake. Hands-on experience with data streaming technologies (Kafka, AWS Kinesis, etc.).
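For illustration only: the cost-monitoring duty mentioned above is often approached by querying Snowflake's ACCOUNT_USAGE views. The hypothetical Python sketch below summarises recent warehouse credit consumption; connection details are placeholders, and access to the SNOWFLAKE.ACCOUNT_USAGE share depends on the role's grants.

```python
# Hypothetical sketch: summarising Snowflake warehouse credit usage for the
# last 7 days from ACCOUNT_USAGE. Connection parameters are placeholders and
# the query assumes the role can read SNOWFLAKE.ACCOUNT_USAGE views.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",   # placeholder
    user="dba_user",
    password="***",
    role="ACCOUNTADMIN",
    warehouse="ADMIN_WH",
)
try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT warehouse_name,
               SUM(credits_used) AS credits_last_7_days
        FROM snowflake.account_usage.warehouse_metering_history
        WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
        GROUP BY warehouse_name
        ORDER BY credits_last_7_days DESC
        """
    )
    for warehouse, credits in cur.fetchall():
        print(f"{warehouse}: {credits:.2f} credits")
finally:
    conn.close()
```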
Posted 1 month ago
5.0 - 7.0 years
15 - 30 Lacs
Gurugram
Remote
Design, develop, and maintain robust data pipelines and ETL/ELT processes on AWS. Leverage AWS services such as S3, Glue, Lambda, Redshift, Athena, EMR, and others to build scalable data solutions. Write efficient and reusable code using Python for data ingestion, transformation, and automation tasks. Collaborate with cross-functional teams including data analysts, data scientists, and software engineers to support data needs. Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost efficiency. Ensure data quality, security, and governance across all systems. Communicate technical solutions clearly and effectively with both technical and non-technical stakeholders. Required Skills & Qualifications: 5+ years of experience in data engineering roles. Strong hands-on experience with Amazon Web Services (AWS), particularly in data-related services (e.g., S3, Glue, Lambda, Redshift, EMR, Athena). Proficiency in Python for scripting and data processing. Experience with SQL and working with relational databases. Solid understanding of data architecture, data modeling, and data warehousing concepts. Experience with CI/CD pipelines and version control tools (e.g., Git). Excellent verbal and written communication skills. Proven ability to work independently in a fully remote environment. Preferred Qualifications: Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions. Familiarity with big data technologies such as Apache Spark or Hadoop. Exposure to infrastructure-as-code tools like Terraform or CloudFormation. Knowledge of data privacy and compliance standards.
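For illustration only: a common pattern in this stack is querying S3-resident data serverlessly through Athena. The hypothetical boto3 snippet below starts an Athena query and polls for completion; the database, table, and S3 output location are placeholders.

```python
# Hypothetical sketch: running an Athena query over S3 data with boto3 and
# waiting for it to finish. Database, table, and output location are placeholders.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

execution = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) FROM analytics.events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = execution["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    print(len(results["ResultSet"]["Rows"]) - 1, "rows returned")  # minus header row
```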
Posted 1 month ago
5.0 - 8.0 years
5 - 7 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: Administer and manage Snowflake cloud data warehouse environments, including provisioning, configuration, monitoring, and maintenance. Implement security policies, compliance, and access controls. Manage Snowflake accounts and databases in a multi-tenant environment. Monitor the systems and provide proactive solutions to ensure high availability and reliability. Monitor and manage Snowflake costs. Collaborate with developers, support engineers and business stakeholders to ensure efficient data integration. Automate database management tasks and procedures to improve operational efficiency. Stay up to date with the latest Snowflake features, best practices, and industry trends to enhance the overall data architecture. Develop and maintain documentation, including database configurations, processes, and standard operating procedures. Support disaster recovery and business continuity planning for Snowflake environments. Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in Snowflake operations and administration. Strong knowledge of SQL, query optimization, and performance tuning techniques. Experience in managing security, access controls, and data governance in Snowflake. Familiarity with AWS. Proficiency in Python or Bash. Experience in automating database tasks using Terraform, CloudFormation, or similar tools. Understanding of data modeling concepts and experience working with structured and semi-structured data (JSON, Avro, Parquet). Strong analytical, problem-solving, and troubleshooting skills. Excellent communication and collaboration abilities. Preferred Qualifications: Snowflake certification (e.g., SnowPro Core, SnowPro Advanced: Architect, Administrator). Experience with CI/CD pipelines and DevOps practices for database management. Knowledge of machine learning and analytics workflows within Snowflake. Hands-on experience with data streaming technologies (Kafka, AWS Kinesis, etc.).
Posted 1 month ago
5.0 - 10.0 years
8 - 12 Lacs
Kochi
Work from Office
Location: Kochi, Coimbatore, Trivandrum. Must-have skills: Big Data, Python or R. Good-to-have skills: Scala, SQL. Job Summary: A Data Scientist is expected to be hands-on in delivering end-to-end projects undertaken in the Analytics space. They must have a proven ability to drive business results with their data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes. Roles and Responsibilities: Identify valuable data sources and collection processes. Supervise preprocessing of structured and unstructured data. Analyze large amounts of information to discover trends and patterns for the insurance industry. Build predictive models and machine-learning algorithms. Combine models through ensemble modeling. Present information using data visualization techniques. Collaborate with engineering and product development teams. Hands-on knowledge of implementing various AI algorithms and best-fit scenarios. Has worked on Generative AI based implementations. Professional and Technical Skills: 3.5-5 years of experience in Analytics systems/program delivery, with at least 2 Big Data or Advanced Analytics project implementations. Experience using statistical computer languages (R, Python, SQL, PySpark, etc.) to manipulate data and draw insights from large data sets; familiarity with Scala, Java or C++. Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks. Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications. Hands-on experience with the Azure/AWS analytics platform (3+ years). Experience using variations of Databricks or similar analytical applications in AWS/Azure. Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop). Strong mathematical skills (e.g. statistics, algebra). Excellent communication and presentation skills. Experience deploying data pipelines in production based on Continuous Delivery practices. Additional Information: Multi-industry domain experience. Expert in Python, Scala, SQL. Knowledge of Tableau/Power BI or similar self-service visualization tools. Interpersonal and team skills should be top-notch; prior leadership experience is nice to have. Qualification: 3.5-5 years of experience is required; educational qualification: Graduation.
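For illustration only: clustering is one of the machine-learning techniques listed above. Here is a tiny, hypothetical scikit-learn sketch that segments policyholders on two invented features; it is a toy example, not part of the posting.

```python
# Hypothetical sketch: k-means segmentation on two invented customer features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Toy feature matrix: [annual_premium, claims_count]
X = np.array([[1200, 0], [1500, 1], [300, 0], [4000, 3], [3800, 2], [450, 1]])

X_scaled = StandardScaler().fit_transform(X)
model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X_scaled)

print(model.labels_)           # cluster assignment per customer
print(model.cluster_centers_)  # centroids in scaled feature space
```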
Posted 2 months ago
10.0 - 14.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Senior ETL & Data Streaming Engineer at DataFlow Group, you will have the opportunity to utilize your extensive expertise in designing, developing, and maintaining robust data pipelines. With over 10 years of experience in the field, you will play a pivotal role in ensuring the scalability, fault tolerance, and performance of our ETL processes. Your responsibilities will include architecting and building both batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis. You will collaborate closely with data architects, data scientists, and business stakeholders to translate data requirements into efficient pipeline solutions and ensure data quality, integrity, and security across all storage solutions. In addition to monitoring, troubleshooting, and optimizing existing data pipelines, you will also be responsible for developing and maintaining comprehensive documentation for all ETL and streaming processes. Your role will involve implementing data governance policies and best practices within the Data Lake and Data Warehouse environments, as well as mentoring junior engineers to foster a culture of technical excellence and continuous improvement. To excel in this role, you should have a strong background in data engineering, with a focus on ETL, ELT, and data pipeline development. Your deep expertise in ETL tools, data streaming technologies, and AWS data services will be essential for success. Proficiency in SQL and at least one scripting language for data manipulation, along with strong database skills, will also be valuable assets in this position. If you are a proactive problem-solver with excellent analytical skills and strong communication abilities, this role offers you the opportunity to stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming. Join us at DataFlow Group and be part of a team dedicated to making informed, cost-effective decisions through cutting-edge data solutions.
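For illustration only: on the streaming side, publishing events to an Amazon Kinesis data stream is a typical building block. The hypothetical boto3 sketch below sends one record; the stream name, region, and payload fields are placeholders, not details of DataFlow Group's systems.

```python
# Hypothetical sketch: publishing one JSON event to a Kinesis data stream.
# Stream name, region, and payload fields are placeholders.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

event = {"applicant_id": "A-123", "stage": "document_received", "ts": "2024-01-01T12:00:00Z"}

response = kinesis.put_record(
    StreamName="verification-events",          # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["applicant_id"],        # keeps one applicant's events ordered
)
print(response["ShardId"], response["SequenceNumber"])
```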
Posted 2 months ago
10.0 - 14.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Senior ETL & Data Streaming Engineer at DataFlow Group, a global provider of Primary Source Verification solutions and background screening services, you will be a key player in the design, development, and maintenance of robust data pipelines. With over 10 years of experience, you will leverage your expertise in both batch ETL processes and real-time data streaming technologies to ensure efficient data extraction, transformation, and loading into our Data Lake and Data Warehouse. Your responsibilities will include designing and implementing highly scalable ETL processes using industry-leading tools, as well as architecting batch and real-time data streaming solutions with technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis. You will collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into effective pipeline solutions, ensuring data quality, integrity, and security across all storage solutions. Monitoring, troubleshooting, and optimizing existing data pipelines for performance, cost-efficiency, and reliability will be a crucial part of your role. Additionally, you will develop comprehensive documentation for all ETL and streaming processes, contribute to data governance policies, and mentor junior engineers to foster a culture of technical excellence and continuous improvement. To excel in this position, you should have 10+ years of progressive experience in data engineering, with a focus on ETL, ELT, and data pipeline development. Your deep expertise in ETL tools like Talend, proficiency in data streaming technologies such as AWS Glue and Apache Kafka, and extensive experience with AWS data services like S3, Glue, and Lake Formation will be essential. Strong knowledge of traditional data warehousing concepts, dimensional modeling, programming languages like SQL and Python, and relational and NoSQL databases will also be required. If you are a problem-solver with excellent analytical skills, strong communication abilities, and a passion for staying updated on emerging technologies and industry best practices in data engineering, ETL, and streaming, we invite you to join our team at DataFlow Group and make a significant impact in the field of data management.
Posted 2 months ago
0.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
Let's do this. Let's change the world. In this vital role you will join a multi-functional team of scientists and software professionals that enables technology and data capabilities to evaluate drug candidates and assess their abilities to affect the biology of drug targets. This team implements scientific software platforms that enable the capture, analysis, storage, and reporting of in vitro assays and in vivo / pre-clinical studies, as well as those that manage compound inventories and biological sample banks. The ideal candidate possesses experience in the pharmaceutical or biotech industry, strong technical skills, and full stack software engineering experience (spanning SQL, back-end, front-end web technologies, and automated testing). Roles & Responsibilities: Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements. Analyze and understand the functional and technical requirements of applications, solutions and systems and translate them into software architecture and design specifications. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Identify and resolve software bugs and performance issues. Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain documentation of software designs, code, and development processes. Customize modules to meet specific business requirements. Work on integrating with other systems and platforms to ensure seamless data flow and functionality. Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently. Contribute to both front-end and back-end development using cloud technology. Develop innovative solutions using generative AI technologies. Identify and resolve technical challenges effectively. Work closely with the product team, the business team including scientists, and other stakeholders. What we expect of you: We are all different, yet we all use our unique contributions to serve patients. The [vital attribute] professional we seek is a [type of person] with these qualifications. Basic Qualifications: Bachelor's degree and 0 to 3 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field, OR a Diploma and 4 to 7 years of experience in Computer Science, IT, Computational Chemistry, Computational Biology/Bioinformatics or a related field. Preferred Qualifications: Experience in implementing and supporting biopharma scientific software platforms. Functional Skills - Must-Have Skills: Proficient in a general-purpose high-level language (e.g. Python, Java, C#.NET). Proficient in a JavaScript UI framework (e.g. React, ExtJS). Proficient in SQL (e.g. Oracle, Postgres, Databricks). Experience with event-based architecture (e.g. MuleSoft, AWS EventBridge, AWS Kinesis, Kafka). Good-to-Have Skills: Strong understanding of software development methodologies, mainly Agile and Scrum. Hands-on experience with full stack software development. Strong understanding of cloud platforms (e.g. AWS) and containerization technologies (e.g., Docker, Kubernetes). Working experience with DevOps practices and CI/CD pipelines. Experience with big data technologies (e.g., Spark, Databricks). Experience with API integration, serverless, and microservices architecture (e.g. MuleSoft, AWS Kafka). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk). Experience with infrastructure-as-code (IaC) tools (Terraform, CloudFormation). Experience with version control systems like Git. Experience with automated testing tools and frameworks. Experience with Benchling. Professional Certifications: AWS Certified Cloud Practitioner preferred. Soft Skills: Excellent problem-solving, analytical, and troubleshooting skills. Strong communication and interpersonal skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to learn quickly and work independently. Team-oriented, with a focus on achieving team goals. Ability to manage multiple priorities successfully. Strong presentation and public speaking skills. What you can expect of us: As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com
Posted 2 months ago
7.0 - 11.0 years
15 - 30 Lacs
Noida
Remote
Job Title: IoT Solutions Architect (MQTT/HiveMQ) Consultant. Location: 100% Remote. Notes: Consumer goods and manufacturing experience are highly preferred; must be comfortable working on the US time zone. Job Description: The consultant will work on a new MQTT/HiveMQ setup for an IoT smart manufacturing project. Cloud platform: Azure. Must have 3-4 years of experience in the field in manufacturing solutions and a minimum of 2 years of HiveMQ - Sequel, cloud/edge integrations experience. These are the skills required: Expertise in MQTT Protocols: deep understanding of MQTT 3.1.1 and MQTT 5.0, including advanced features like QoS levels, retained messages, session expiry, and shared subscriptions. HiveMQ Platform Proficiency: hands-on experience with HiveMQ broker setup, configuration, clustering, and deployment (on-premises, cloud, or Kubernetes). Edge-to-Cloud Integration: ability to design and implement solutions that bridge OT (Operational Technology) and IT systems using MQTT. Sparkplug B Knowledge: familiarity with Sparkplug B for contextual MQTT data in IIoT environments. Enterprise Integration: experience with HiveMQ Enterprise Extensions (e.g., Kafka, Google Cloud Pub/Sub, AWS Kinesis, PostgreSQL, MongoDB, Snowflake). Security Implementation: knowledge of securing MQTT deployments using the HiveMQ Enterprise Security Extension (authentication, authorization, TLS, etc.). Custom Extension Development: ability to develop and deploy custom HiveMQ extensions using the open-source SDK. Development & Scripting: MQTT Client Libraries: proficiency in using MQTT client libraries (e.g., Eclipse Paho, HiveMQ MQTT Client) in languages like Java, Python, or JavaScript. MQTT CLI: familiarity with the MQTT Command Line Interface for testing and debugging. Scripting & Automation: ability to automate deployment and testing using tools like HiveMQ Swarm. Soft Skills & Experience: IoT/IIoT Project Experience: proven track record in implementing MQTT-based IoT solutions. Problem Solving & Debugging: strong analytical skills to troubleshoot MQTT communication and broker issues. Communication & Documentation: ability to clearly document architecture, configurations, and best practices for clients. Interested candidates can apply: dsingh15@fcsltd.com
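For illustration only: MQTT publishing with QoS, as referenced above, can be sketched with the Eclipse Paho Python client. The hypothetical example below is written against the paho-mqtt 1.x API (2.x additionally requires a CallbackAPIVersion argument to Client); the broker address, credentials, topic, and payload are placeholders, not details of this engagement.

```python
# Hypothetical sketch: publishing a machine telemetry message over TLS with QoS 1.
# Broker host, credentials, topic, and payload are placeholders. Written for the
# paho-mqtt 1.x API; paho-mqtt 2.x also needs a CallbackAPIVersion argument.
import json

import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="line1-packer-01")
client.username_pw_set("edge-gateway", "***")
client.tls_set()                      # default system CA bundle

client.connect("broker.example.com", 8883)
client.loop_start()

payload = json.dumps({"machine": "packer-01", "state": "RUNNING", "oee": 0.82})
# QoS 1: broker acknowledges receipt at least once; retain=False for live telemetry.
result = client.publish("plant1/line1/packer-01/state", payload, qos=1, retain=False)
result.wait_for_publish()

client.loop_stop()
client.disconnect()
```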
Posted 2 months ago
5.0 - 8.0 years
11 - 12 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
We are looking for an experienced Project Manager with a strong foundation in software development who can effectively engage with clients and lead project teams. The ideal candidate will have hands-on experience in managing complex software projects, specifically in IoT and mobile app integration. This role involves overseeing the development of a baby monitoring app and its integration with camera systems using AWS IoT Core and AWS Kinesis Video Streams (KVS). The candidate will handle the complete project lifecycle, ensuring delivery within budget, scope, and time. Responsibilities include project governance, defining SLAs, writing Statements of Work (SoW), and converting business requirements into actionable use cases and user stories. Additionally, the Project Manager will manage and mentor the project team, foster a collaborative environment, and ensure adherence to Agile methodologies. Location - Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 months ago
5.0 - 10.0 years
0 - 1 Lacs
Hyderabad, Pune, Ahmedabad
Hybrid
Contractual (Project-Based). Notice Period: Immediate - 15 Days. Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u Resume: shweta.soni@panthsoftech.com
Posted 2 months ago