10.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Chennai, India The Opportunity Anthology delivers education and technology solutions so that students can reach their full potential and learning institutions can thrive. Our mission is to empower educators and institutions with meaningful innovation that’s simple and intelligent, inspiring student success and institutional growth. The Power of Together is built on having a diverse and inclusive workforce. We are committed to making diversity, inclusion, and belonging a foundational part of our hiring practices and who we are as a company. For more information about Anthology and our career opportunities, please visit www.anthology.com. This role focuses on Anthology Ally, a revolutionary product that makes digital course content more accessible. As the accessibility of digital course content becomes increasingly important worldwide, institutions must address long-standing and often overbearing challenges. Anthology’s Ally engineering team is responsible for developing industry-leading tools to improve accessibility through inclusivity, sustainability, and automation for all students. As a Staff Software Engineer on our team, you will design, develop, and maintain features of the Ally product. You’ll also communicate and partner cross-functionally with teams in product and software development. In this role, you will work on an ethical product, using Scala for the backend and JavaScript for the frontend. We run our applications in the AWS cloud and use Git for version control. You’ll work on a distributed team, collaborating with colleagues around the globe. The Candidate Required skills/qualifications: 10-12 years of relevant experience Good abstract and critical thinking skills Familiarity with the full-cycle development process Experience developing, building, testing, deploying, and operating applications Experience working with cloud technologies Awareness of how distributed systems work Strong command of backend programming languages (Java, JavaScript, Python, etc.) Familiarity with relational database design and querying concepts Willingness to break things and make them work again Knowledge of and experience with CI/CD principles and tools (Jenkins or Azure Pipelines) Fluency in written and spoken English Preferred Skills/qualifications Experience leading a team Command line scripting knowledge in a Linux-like environment Knowledge of cloud computing (AWS) Experience with IntelliJ IDEA (or other IDE) Experience with a version control system (Git) Experience with a bug-tracking system (JIRA) Experience with a continuous integration system and continuous delivery practices Functional programming experience such as Haskell or Scala Experience with front-end development or interest in learning (Angular) This job description is not designed to contain a comprehensive listing of activities, duties, or responsibilities that are required. Nothing in this job description restricts management's right to assign or reassign duties and responsibilities at any time. Anthology is an equal employment opportunity/affirmative action employer and considers qualified applicants for employment without regard to race, gender, age, color, religion, national origin, marital status, disability, sexual orientation, gender identity/expression, protected military/veteran status, or any other legally protected factor.
Posted 13 hours ago
3.0 - 4.0 years
4 - 12 Lacs
Hyderābād
On-site
Job Description: Summary The Data Engineer will be responsible for designing, developing, and maintaining the data infrastructure for a healthcare organization. The ideal candidate will have experience in working with healthcare data, including EHR, HIMS, PACS, and RIS. They will also have experience with SQL, Elasticsearch, and data integration tools such as Talend. Key Responsibilities: Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Microsoft Fabric. Data Integration: Integrate data from various sources, ensuring data quality and consistency. Data Transformation: Perform data cleaning, transformation, and aggregation to support analytics and reporting. Performance Optimization: Optimize data processing workflows for performance and scalability. Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Documentation: Create and maintain documentation for data processes, workflows, and infrastructure. Required Skills and Qualifications: Experience: 3-4 years of experience in data engineering or related field. Technical Skills: Proficiency in Microsoft Fabric and its components. Strong knowledge of SQL and database management systems. Experience with big data technologies (e.g., Spark, Hadoop). Familiarity with data warehousing concepts and ETL processes. Programming Skills: Proficiency in programming languages such as Python, Java, or Scala. Python will be preferable. Analytical Skills: Strong problem-solving skills and ability to analyze complex data sets. Communication Skills: Excellent verbal and written communication skills. Preferred Qualifications: Certifications: Relevant certifications in data engineering or Microsoft technologies. Experience: Experience with cloud platforms. Working in Azure is a must. Tools: Familiarity with data visualization tools (e.g., Power BI, Tableau). Job Types: Full-time, Permanent Pay: ₹400,000.00 - ₹1,200,000.00 per year Benefits: Flexible schedule Health insurance Paid time off Schedule: Day shift Monday to Friday Experience: Data Engineer: 3 years (Preferred) SQL: 2 years (Preferred) Python: 2 years (Preferred) ETL: 2 years (Preferred) Spark: 2 years (Preferred) Azure: 2 years (Preferred) Work Location: In person
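As a rough illustration of the cleaning-and-aggregation work described above, here is a minimal PySpark sketch; the lakehouse paths and the encounters/department/admit_ts names are hypothetical stand-ins for real healthcare tables.

```python
# Minimal PySpark sketch of a cleaning/aggregation step like the one described
# above. Table and column names (encounters, patient_id, admit_ts) are
# hypothetical; a Fabric lakehouse exposes similar Spark APIs.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("encounter-aggregation").getOrCreate()

raw = spark.read.parquet("/lakehouse/raw/encounters")   # hypothetical path

clean = (
    raw.dropDuplicates(["encounter_id"])                # de-duplicate records
       .filter(F.col("patient_id").isNotNull())         # drop rows missing keys
       .withColumn("admit_month", F.date_trunc("month", F.col("admit_ts")))
)

# Aggregate encounters per department per month for reporting.
summary = (
    clean.groupBy("department", "admit_month")
         .agg(F.countDistinct("patient_id").alias("patients"),
              F.count("*").alias("encounters"))
)

summary.write.mode("overwrite").parquet("/lakehouse/curated/encounter_summary")
```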
Posted 14 hours ago
3.0 years
0 Lacs
Mumbai
Remote
Experience: 3 to 4 Years Location: Mumbai, Maharashtra, India Openings: 2 Job description: Key responsibilities: Apply design and data analysis techniques to organize the presentation of data in innovative ways, collaborate with research analysts to identify the best means of visually depicting a story Design and develop custom dashboard solutions, as well as reusable data visualization templates Analyze data, and identify trends and discover insights that will guide strategic leadership decisions Use JavaScript, Tableau, QlikView, QlikSense, SAS Visual Analytics, Power BI, and dashboard design/development in daily practice. Desired Qualifications: M.Sc. or PhD in a corresponding field; Hands-on experience of programming languages (e.g., Python, Java, Scala) and/or Big Data systems (like Hadoop, Spark, Storm); Experience with Linux, Unix shell scripting, NoSQL, Machine Learning; Knowledge and experience with cloud environments like AWS/Azure/GCP; Knowledge of Scrum, Agile. Requirement: Required Qualifications: Experience with visual report and dynamic dashboard design and development on platforms like Tableau, Qlik, Power BI, SAS, or CRM Analytics. Experience with SQL, ETL, data warehousing, BI. Knowledge of Big Data. Strong verbal and written communication skills in English. Benefits: Competitive salary 2625 – 4500 EUR gross Flexible vacation + health & travel insurance + relocation Work from home, flexible working hours Work with Fortune 500 companies from different industries all over the world Skills development and training opportunities, company-paid certifications Opportunities to advance your career An open-minded and inclusive company culture Role: Visualization Expert Department: UI/UX Education: Bachelor’s degree in Computer Science, Statistics, Applied Mathematics, or another related field
Posted 14 hours ago
3.0 years
0 Lacs
Mumbai
Remote
Experience: 3 to 4 Years Location: Mumbai Openings: 1 Job description: We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research. Your goal will be to help our company analyze trends to make better decisions. Requirement: Identify valuable data sources and automate collection processes Undertake preprocessing of structured and unstructured data Analyze large amounts of information to discover trends and patterns Build predictive models and machine-learning algorithms Combine models through ensemble modeling Present information using data visualization techniques Propose solutions and strategies to business challenges Collaborate with engineering and product development teams Skills: Proven experience as a Data Scientist. Experience in data mining Understanding of machine learning and operations research Knowledge of R, SQL and Python; familiarity with Scala, Java or C++ is an asset Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop) Analytical mind and business acumen Strong math skills (e.g. statistics, algebra) Problem-solving aptitude Excellent communication and presentation skills BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred Benefits: Work from home, flexible working hours Skills development and training opportunities, company-paid certifications Opportunities to advance your career An open-minded and inclusive company culture Role: Data Scientist Department: Software Engineering Education: B.Tech
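A minimal scikit-learn sketch of the ensemble-modeling step mentioned above, using synthetic data in place of real business features:

```python
# Minimal scikit-learn sketch of combining models through ensembling, as the
# posting mentions. The data here is synthetic; real features would come from
# the preprocessed business datasets described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Combine a linear model and a tree ensemble with soft voting.
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```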
Posted 14 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are looking for a skilled Data Engineer to join our Digital Customer Solutions team. The ideal candidate should have experience in cloud computing and big data technologies. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data solutions that can handle large volumes of data. You will work closely with stakeholders to ensure that the data is accurate, reliable, and easily accessible. Responsibilities Design, build, and maintain scalable data pipelines that can handle large volumes of data. Document design of proposed solutions including structuring data (data modelling applying different techniques including 3-NF and Dimensional modelling) and optimising data for further consumption (working closely with Data Visualization Engineers, Front-end Developers, Data Scientists and ML Engineers). Develop and maintain ETL processes to extract data from various sources (including sensor, semi-structured and unstructured, as well as structured data stored in traditional databases, file stores or from SOAP and REST data interfaces). Develop data integration patterns for batch and streaming processes, including implementation of incremental loads. Build quick prototypes and proofs of concept to validate assumptions and prove the value of proposed solutions or new cloud-based services. Define data engineering standards and develop data ingestion/integration frameworks. Participate in code reviews and ensure all solutions are aligned with architectural and requirement specifications. Develop and maintain cloud-based infrastructure to support data processing using Azure Data Services (ADF, ADLS, Synapse, Azure SQL DB, Cosmos DB). Develop and maintain automated data quality pipelines. Collaborate with cross-functional teams to identify opportunities for process improvement. Manage a team of Data Engineers. About You To be considered for this role it is envisaged you will possess the following attributes: Bachelor’s degree in Computer Science or related field. 7+ years of experience in big data technologies such as Hadoop, Spark, Hive & Delta Lake. 7+ years of experience in cloud computing platforms such as Azure, AWS or GCP. Experience in working in cloud data platforms, including deep understanding of scaled data solutions. Experience in working with different data integration patterns (batch and streaming), implementing incremental data loads. Proficient in scripting in Java, Windows, and PowerShell.
Proficient in at least one programming language like Python or Scala. Expert in SQL. Proficient in working with data services like ADLS, Azure SQL DB, Azure Synapse, Snowflake, NoSQL (e.g., Cosmos DB, MongoDB), Azure Data Factory, Databricks or similar on AWS/GCP. Experience in using ETL tools (like Informatica IICS Data Integration) is an advantage. Strong understanding of Data Quality principles and experience in implementing them. Moving forward together We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley. Company Worley Primary Location IND-AP-Hyderabad Job Digital Solutions Schedule Full-time Employment Type Agency Contractor Job Level Experienced Job Posting Jun 16, 2025 Unposting Date Jul 16, 2025 Reporting Manager Title Senior General Manager
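One common reading of the incremental-load requirement above is a watermark-based batch pattern; the sketch below assumes a last_modified column and illustrative storage paths, neither of which comes from the posting.

```python
# Illustrative watermark-based incremental load, one way to implement the
# batch integration pattern described above. Source/target paths and the
# last_modified column are assumptions, not details from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

target_path = "/mnt/curated/orders"            # hypothetical curated table
source_path = "/mnt/raw/orders"                # hypothetical raw landing zone

# 1. Find the high-water mark already loaded into the target.
try:
    watermark = (spark.read.parquet(target_path)
                      .agg(F.max("last_modified")).collect()[0][0])
except Exception:
    watermark = None                           # first run: full load

# 2. Pull only new or changed rows from the source.
incoming = spark.read.parquet(source_path)
if watermark is not None:
    incoming = incoming.filter(F.col("last_modified") > F.lit(watermark))

# 3. Append the delta; a production pipeline might MERGE into Delta Lake instead.
incoming.write.mode("append").parquet(target_path)
```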
Posted 14 hours ago
3.0 years
4 - 10 Lacs
Chennai
On-site
- 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience - Experience defining requirements and using data and metrics to draw business insights - Experience with SQL or ETL - Knowledge of data visualization tools such as Quick Sight, Tableau, Power BI or other BI packages - 1+ years of tax, finance or a related analytical field experience Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join Retail Business Services team and help unlock insights which take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employee global work authorization experiences. They will do this by partnering with key stakeholders to be curious and comfortable digging deep into the business challenges to understand and identify insights that will enable us to figure out standards to improve our ability to globally scale this program. They will be comfortable delivering/presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process. This role requires an individual with excellent analytical abilities as well as an outstanding business acumen. The candidate knows and values our customers (internal and external) and will work back from the customer to create structured processes for global expansions of work authorization, and help integrate new countries/new acquisitions into the existing program. They are experts in partnering and earning trust with operations/business leaders to drive these key business decisions. Responsibilities: - Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards. - Partner with operations/business teams to consult, develop and implement KPI’s, automated reporting/process solutions, and process improvements to meet business needs. - Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format. - Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks. - Participate in strategic and tactical planning discussions. - Design, develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support our business needs. - Excellent writing skills, to create artifacts easily digestible by business and tech partners. Key job responsibilities • Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight • Understand the requirements of stakeholders and map them with the data sources/data warehouse • Own the delivery and backup of periodic metrics, dashboards to the leadership team • Draw inferences and conclusions, and create dashboards and visualizations of processed data, identify trends, anomalies • Execute high priority (i.e. 
cross functional, high impact) projects to improve operations performance with the help of Operations Analytics managers • Perform business analysis and data queries using appropriate tools • Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area Preferred Qualifications Experience in Amazon Redshift and other AWS technologies Experience creating complex SQL queries joining multiple datasets and with ETL/DW concepts Experience in Scala and PySpark Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 14 hours ago
0 years
0 Lacs
India
Remote
Future-Able is looking for a Data Engineer, a full-time contract role, to work for: Naked & Thriving - An organic, botanical skincare brand committed to creating high-performing, naturally derived products that are as kind to the planet as they are to your skin. Our mission is to empower individuals to embrace sustainable self-care while nurturing their natural beauty. Every product we craft reflects our dedication to sustainability, quality, and transparency, ensuring our customers feel confident with every choice they make. As we rapidly grow and expand into new categories, channels, and countries, customer satisfaction remains our top priority. Job Summary: We are seeking a Data Engineer with expertise in Python, exposure to AI & Machine Learning, and a strong understanding of eCommerce analytics to design, develop, and optimize data pipelines. The ideal candidate will work on Google Cloud infrastructure, enabling advanced insights using Google Analytics (GA4). What You Will Do: ● Develop & maintain scalable data pipelines to support analytics and AI-driven models. ● Work with Python (or equivalent programming language) for data processing and transformation. ● Implement AI & Machine Learning techniques for predictive analytics and automation. ● Optimize eCommerce data insights using GA4 and Google Analytics to drive business decisions. ● Build cloud-based data infrastructure leveraging Google Cloud services like BigQuery, Pub/Sub, and Dataflow. ● Ensure data integrity and governance across structured and unstructured datasets. ● Collaborate with cross-functional teams including product managers, analysts, and marketing professionals. ● Monitor & troubleshoot data pipelines to ensure smooth operation and performance. We are looking for: ● Proficiency in Python or a similar language (e.g., Scala). ● Experience with eCommerce analytics and tracking frameworks. ● Expertise in Google Analytics & GA4 for data-driven insights. ● Knowledge of Google Cloud Platform (GCP), including BigQuery, Cloud Functions, and Dataflow. ● Experience in designing, building, and optimizing data pipelines using ETL frameworks. ● Familiarity with data warehousing concepts and SQL-based query optimization. ● Strong problem-solving and communication skills in a fast-paced environment. What will make you stand out: ● Experience with event-driven architecture for real-time data processing. ● Understanding of marketing analytics and attribution modeling. ● Previous work in a high-growth eCommerce environment. ● Exposure to AI & Machine Learning concepts and model deployment. Benefits: ● USD salary. ● Fully remote work. ● USD 50 for health insurance payment. ● 30 days of paid time off per year. ● The possibility of being selected for annual bonuses based on business performance and personal achievements.
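A minimal sketch of querying the GA4 BigQuery export for purchase trends, the kind of eCommerce insight work described above; the project and dataset names are placeholders, and the daily events_YYYYMMDD export convention is assumed.

```python
# Minimal sketch of pulling GA4 export data from BigQuery. The project and
# dataset names below are placeholders; the GA4 export convention of daily
# events_YYYYMMDD tables is assumed.
from google.cloud import bigquery

client = bigquery.Client(project="my-ecommerce-project")  # hypothetical project

query = """
    SELECT event_date,
           COUNTIF(event_name = 'purchase') AS purchases,
           COUNT(DISTINCT user_pseudo_id)   AS users
    FROM `my-ecommerce-project.analytics_123456.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240131'
    GROUP BY event_date
    ORDER BY event_date
"""

for row in client.query(query).result():
    print(row.event_date, row.purchases, row.users)
```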
Posted 14 hours ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join Retail Business Services team and help unlock insights which take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employee global work authorization experiences. They will do this by partnering with key stakeholders to be curious and comfortable digging deep into the business challenges to understand and identify insights that will enable us to figure out standards to improve our ability to globally scale this program. They will be comfortable delivering/presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process. This role requires an individual with excellent analytical abilities as well as an outstanding business acumen. The candidate knows and values our customers (internal and external) and will work back from the customer to create structured processes for global expansions of work authorization, and help integrate new countries/new acquisitions into the existing program. They are experts in partnering and earning trust with operations/business leaders to drive these key business decisions. Responsibilities Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards. Partner with operations/business teams to consult, develop and implement KPI’s, automated reporting/process solutions, and process improvements to meet business needs. Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format. Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks. Participate in strategic and tactical planning discussions. Design, develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support our business needs. Excellent writing skills, to create artifacts easily digestible by business and tech partners. Key job responsibilities Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight Understand the requirements of stakeholders and map them with the data sources/data warehouse Own the delivery and backup of periodic metrics, dashboards to the leadership team Draw inferences and conclusions, and create dashboards and visualizations of processed data, identify trends, anomalies Execute high priority (i.e. 
cross functional, high impact) projects to improve operations performance with the help of Operations Analytics managers Perform business analysis and data queries using appropriate tools Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area Basic Qualifications 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience Experience defining requirements and using data and metrics to draw business insights Experience with SQL or ETL Knowledge of data visualization tools such as Quick Sight, Tableau, Power BI or other BI packages 1+ years of tax, finance or a related analytical field experience Preferred Qualifications Experience in Amazon Redshift and other AWS technologies Experience creating complex SQL queries joining multiple datasets and with ETL/DW concepts Experience in Scala and PySpark Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - BLR 14 SEZ Job ID: A3009262
Posted 14 hours ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Are you passionate about solving business challenges at a global scale? Amazon Employee Services is looking for an experienced Business Analyst to join Retail Business Services team and help unlock insights which take our business to the next level. The candidate will be excited about understanding and implementing new and repeatable processes to improve our employee global work authorization experiences. They will do this by partnering with key stakeholders to be curious and comfortable digging deep into the business challenges to understand and identify insights that will enable us to figure out standards to improve our ability to globally scale this program. They will be comfortable delivering/presenting these recommended solutions by retrieving and integrating artifacts in a format that is immediately useful to improve the business decision-making process. This role requires an individual with excellent analytical abilities as well as an outstanding business acumen. The candidate knows and values our customers (internal and external) and will work back from the customer to create structured processes for global expansions of work authorization, and help integrate new countries/new acquisitions into the existing program. They are experts in partnering and earning trust with operations/business leaders to drive these key business decisions. Responsibilities Own the development and maintenance of new and existing artifacts focused on analysis of requirements, metrics, and reporting dashboards. Partner with operations/business teams to consult, develop and implement KPI’s, automated reporting/process solutions, and process improvements to meet business needs. Enable effective decision making by retrieving and aggregating data from multiple sources and compiling it into a digestible and actionable format. Prepare and deliver business requirements reviews to the senior management team regarding progress and roadblocks. Participate in strategic and tactical planning discussions. Design, develop and maintain scaled, automated, user-friendly systems, reports, dashboards, etc. that will support our business needs. Excellent writing skills, to create artifacts easily digestible by business and tech partners. Key job responsibilities Design and develop highly available dashboards and metrics using SQL and Excel/Tableau/QuickSight Understand the requirements of stakeholders and map them with the data sources/data warehouse Own the delivery and backup of periodic metrics, dashboards to the leadership team Draw inferences and conclusions, and create dashboards and visualizations of processed data, identify trends, anomalies Execute high priority (i.e. 
cross functional, high impact) projects to improve operations performance with the help of Operations Analytics managers Perform business analysis and data queries using appropriate tools Work closely with internal stakeholders such as business teams, engineering teams, and partner teams and align them with respect to your focus area Basic Qualifications 3+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience Experience defining requirements and using data and metrics to draw business insights Experience with SQL or ETL Knowledge of data visualization tools such as Quick Sight, Tableau, Power BI or other BI packages 1+ years of tax, finance or a related analytical field experience Preferred Qualifications Experience in Amazon Redshift and other AWS technologies Experience creating complex SQL queries joining multiple datasets and with ETL/DW concepts Experience in Scala and PySpark Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - BLR 14 SEZ Job ID: A3009262
Posted 14 hours ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Penetration Tester Role: The Penetration Tester will provide broad and in-depth knowledge to conduct offensive cyber operations across the organization globally. In this role, you will conduct offensive security operations to emulate adversary tactics and procedures to test preventative, detective and response controls across the global technology landscape. You will use your expertise to help influence technology decisions and work as part of a team to create consistent approaches to offensive security processes and techniques. Penetration Testing Duties and Responsibilities: Operate in a hands-on role involving penetration testing and vulnerability assessment activities of complex applications, operating systems, wired and wireless networks, mobile applications/devices, and cloud (Azure, AWS, Google, etc.) apps and software. Set up the environment and maintain the required tools needed for the team. Lead and manage the Penetration Testing team and supporting vendors to ensure quality deliveries to our customers. Develop and maintain security testing plans Able to automate penetration and other security testing on networks, systems and applications. Develop meaningful metrics to reflect the true posture of the environment allowing the organization to make educated decisions based on risk. Produce actionable, threat-based reports on security testing results Act as a source of direction, training, and guidance for less experienced staff Consult with application developers, systems administrators, and management to demonstrate security testing results, explain the threat presented by the results, and consult on remediation Communicate security issues to a wide variety of internal and external “customers” including technical teams, executives, risk groups, vendors and regulators Deliver the annual penetration testing schedule and conduct awareness campaigns to ensure proper budgeting by business lines for annual tests. Foster and maintain relationships with key stakeholders and business partners Certificates: Must have: Offensive Security Certified Professional (OSCP) Good to have: CREST Registered Penetration Tester (CRT) Certified Ethical Hacker (CEH) Certification GIAC Certified Penetration Tester (GPEN) Penetration Testing Expert Requirements and Qualifications: Previous working experience as a Penetration Testing Expert for 5-7 years BE in Computer Information Systems, Management Information Systems, or a similar relevant field In-depth knowledge of application development processes and at least one programming or scripting language (e.g., Java, Scala, C#, Ruby, Perl, Python, PowerShell) Must know standard industry security practices (OWASP, SANS, etc.) Knowledgeable about industry security guidelines and compliance standards such as ISO 27001, SOC 2, HIPAA, etc. Hands-on experience with testing frameworks such as the PTES and OWASP. Applicable knowledge of Windows client/server, Unix/Linux systems, Mac OS X, VMware/Xen, and cloud technologies such as AWS, Azure, or Google Cloud Critical thinker and problem solver Excellent organizational and time management skills
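As a trivial illustration of automating one small piece of security testing (not a substitute for purpose-built tooling such as Nmap or a full testing framework), here is a standard-library TCP connect check; the target host is a hypothetical in-scope system.

```python
# Trivial standard-library sketch of automating one small reconnaissance step
# (a TCP connect scan). Only run against hosts you are explicitly authorized
# to test; real engagements would rely on purpose-built tools instead.
import socket
from concurrent.futures import ThreadPoolExecutor

TARGET = "scanme.example.internal"   # hypothetical in-scope host
PORTS = [22, 80, 443, 3389, 8080]

def check_port(port):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return port, s.connect_ex((TARGET, port)) == 0   # 0 means connected

with ThreadPoolExecutor(max_workers=8) as pool:
    for port, is_open in pool.map(check_port, PORTS):
        print(f"{TARGET}:{port} {'open' if is_open else 'closed/filtered'}")
```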
Posted 14 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, cloud computing, etc.) Preferred Education Master's Degree Required Technical And Professional Expertise Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on Azure; Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Good to excellent SQL skills Preferred Technical And Professional Experience Certification in Azure and Databricks, or Cloudera Spark certified developers
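A minimal sketch of the kind of streaming ingestion pipeline described above, using Spark Structured Streaming with Kafka; the broker, topic, and storage paths are placeholders rather than details from the posting.

```python
# Sketch of a streaming ingest pipeline using Spark Structured Streaming with
# Kafka. Broker, topic, and paths are placeholders; a real job would add
# schema parsing and error handling.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical
          .option("subscribe", "events")                      # hypothetical topic
          .option("startingOffsets", "latest")
          .load())

# Kafka delivers key/value as binary; cast the payload to string for parsing.
decoded = events.select(
    F.col("timestamp"),
    F.col("value").cast("string").alias("payload"),
)

query = (decoded.writeStream
         .format("parquet")
         .option("path", "/mnt/datalake/bronze/events")
         .option("checkpointLocation", "/mnt/datalake/checkpoints/events")
         .outputMode("append")
         .start())
query.awaitTermination()
```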
Posted 15 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Role: Data Engineering Lead Experience: 7-10 Years Location: Hyderabad We need immediate joiners only (max. 15 days). This is a work-from-office role, 5 days a week (no hybrid/remote opportunities). We are looking for candidates with strong experience in data architecture. About the company: We provide companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations, where fact-based decision-making is embedded into daily operations, which leads to better processes and outcomes. Our team combines strategic consulting services with growth-enabling technologies to evaluate risk, manage data, and leverage AI and automated processes more effectively. We bring deep Big Four consulting experience in business transformation and efficient processes. Job Description: We are looking for an accomplished and dynamic Data Engineering Lead to join our team and drive the design, development, and delivery of cutting-edge data solutions. This role requires a balance of strong technical expertise, strategic leadership, and a consulting mindset. As the Lead Data Engineer, you will oversee the design and development of robust data pipelines and systems, manage and mentor a team of 5 to 7 engineers, and play a critical role in architecting innovative solutions tailored to client needs. You will lead by example, fostering a culture of accountability, ownership, and continuous improvement while delivering impactful, scalable data solutions in a fast-paced, consulting environment. Key Responsibilities:- Client Collaboration Act as the primary point of contact for US-based clients, ensuring alignment on project goals, timelines, and deliverables. Engage with stakeholders to understand requirements and ensure alignment throughout the project lifecycle. Present technical concepts and designs to both technical and non-technical audiences. Communicate effectively with stakeholders to ensure alignment on project goals, timelines, and deliverables. Set realistic expectations with clients and proactively address concerns or risks. Data Solution Design And Development Architect, design, and implement end-to-end data pipelines and systems that handle large-scale, complex datasets. Ensure optimal system architecture for performance, scalability, and reliability. Evaluate and integrate new technologies to enhance existing solutions. Implement best practices in ETL/ELT processes, data integration, and data warehousing. Project Leadership And Delivery Lead technical project execution, ensuring timelines and deliverables are met with high quality. Collaborate with cross-functional teams to align business goals with technical solutions. Act as the primary point of contact for clients, translating business requirements into actionable technical strategies. Team Leadership And Development Manage, mentor, and grow a team of 5 to 7 data engineers; Ensure timely follow-ups on action items and maintain seamless communication across time zones. Conduct code reviews, validations, and provide feedback to ensure adherence to technical standards. Provide technical guidance and foster an environment of continuous learning, innovation, and collaboration. Support collaboration and alignment between the client and delivery teams. Optimization And Performance Tuning Be hands-on in developing, testing, and documenting data pipelines and solutions as needed. Analyze and optimize existing data workflows for performance and cost-efficiency.
Troubleshoot and resolve complex technical issues within data systems. Adaptability And Innovation Embrace a consulting mindset with the ability to quickly learn and adopt new tools, technologies, and frameworks. Identify opportunities for innovation and implement cutting-edge technologies in data engineering. Exhibit a "figure it out" attitude, taking ownership and accountability for challenges and solutions. Learning And Adaptability Stay updated with emerging data technologies, frameworks, and tools. Actively explore and integrate new technologies to improve existing workflows and solutions. Internal Initiatives And Eminence Building Drive internal initiatives to improve processes, frameworks, and methodologies. Contribute to the organization’s eminence by developing thought leadership, sharing best practices, and participating in knowledge-sharing activities. Qualifications Education Bachelor’s or master’s degree in computer science, Data Engineering, or a related field. Certifications in cloud platforms such as Snowflake Snowpro, Data Engineer is a plus. Experience 8+ years of experience in data engineering with hands-on expertise in data pipeline development, architecture, and system optimization Demonstrated success in managing global teams, especially across US and India time zones. Proven track record in leading data engineering teams and managing end-to-end project delivery. Strong background in data warehousing and familiarity with tools such as Matillion, dbt, Striim, etc. Technical Skills Lead the design, development, and deployment of scalable data architectures, pipelines, and processes tailored to client needs Expertise in programming languages such as Python, Scala, or Java. Proficiency in designing and delivering data pipelines in Cloud Data Warehouses (e.g., Snowflake, Redshift), using various ETL/ELT tools such as Matillion, dbt, Striim, etc. Solid understanding of database systems (relational and NoSQL) and data modeling techniques. Hands-on experience of 2+ years in designing and developing data integration solutions using Matillion and/or dbt. Strong knowledge of data engineering and integration frameworks. Expertise in architecting data solutions. Successfully implemented at least two end-to-end projects with multiple transformation layers. Good grasp of coding standards, with the ability to define standards and testing strategies for projects. Proficiency in working with cloud platforms (AWS, Azure, GCP) and associated data services. Enthusiastic about working in Agile methodology. Possess a comprehensive understanding of the DevOps process including GitHub integration and CI/CD pipelines. Soft Skills Exceptional problem-solving and analytical skills. Strong communication and interpersonal skills to manage client relationships and team dynamics. Ability to thrive in a consulting environment, quickly adapting to new challenges and domains. Ability to handle ambiguity and proactively take ownership of challenges. Demonstrated accountability, ownership, and a proactive approach to solving problems. Why Join Us? Be at the forefront of data innovation and lead impactful projects. Work with a collaborative and forward-thinking team. Opportunity to mentor and develop talent in the data engineering space. Competitive compensation and benefits package. 
Skills: ETL/ELT processes, cloud platforms (AWS, Azure, GCP), data pipeline development, Python, SQL, NoSQL & data modeling, data modeling techniques, data engineering, data warehousing, programming languages (Python, Scala, Java), DevOps process, CI/CD pipelines, data integration, system optimization, Azure, Agile methodology, GitHub integration, data architecture, AWS
Posted 15 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Overview: The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up. You’ll be responsible for: Create and maintain optimal data pipeline architecture, assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. You’d have: We are looking for a candidate with 3+ years of experience in a Data Engineer role, who has attained a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools: Experience with data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc. Experience with relational SQL and NoSQL databases, including ClickHouse, Postgres and MySQL. Experience with stream-processing systems: Storm, Spark Streaming, Kafka, etc. Experience with object-oriented/functional scripting languages: Python, Scala, etc. Experience building and optimizing data pipelines, architectures and data sets. Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Strong analytic skills related to working with unstructured datasets. Build processes supporting data transformation, data structures, metadata, dependency and workload management. Working knowledge of message queuing, stream processing, and highly scalable data stores Why Join us? Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
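A minimal Apache Airflow sketch of the pipeline orchestration mentioned above; the DAG id, schedule, and task bodies are illustrative stubs rather than anything specified in the posting.

```python
# Minimal Apache Airflow sketch of orchestrating an extract -> transform ->
# load workflow. Task bodies are stubs; the DAG id, schedule, and function
# names are illustrative only.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from source systems")      # placeholder

def transform():
    print("clean and aggregate the extracted data")   # placeholder

def load():
    print("write curated tables to the warehouse")    # placeholder

with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```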
Posted 15 hours ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your Role And Responsibilities As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform Experience in developing streaming pipelines Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet ever-increasing data volumes, using big data/cloud technologies (Apache Spark, Kafka, cloud computing, etc.) Preferred Education Master's Degree Required Technical And Professional Expertise Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala; Minimum 3 years of experience on Cloud Data Platforms on Azure; Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Good to excellent SQL skills Exposure to streaming solutions and message brokers like Kafka Preferred Technical And Professional Experience Certification in Azure and Databricks, or Cloudera Spark certified developers
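A minimal sketch of the batch side of this work: ingest files with Spark, apply a simple transformation, and publish a Hive table; the paths, database, and column names are assumptions for illustration.

```python
# Sketch of a batch pipeline: ingest CSV files, apply a simple transformation,
# and publish to a Hive table. Paths, database, and column names are
# assumptions for illustration.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("file-ingest")
         .enableHiveSupport()
         .getOrCreate())

orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("/landing/orders/*.csv"))            # hypothetical landing zone

curated = (orders
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0))            # simple quality rule

(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("sales.orders_curated"))       # hypothetical Hive table
```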
Posted 15 hours ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role Description Role Proficiency: Develops applications and participates in maintenance. Reuses proven solutions contributes to research and development activities with some guidance from Product Engineer I Outcomes Understand product requirements and user stories from customer discovery process Ensure requirements coverage of complex features with unit test cases Troubleshoots development and production problems across multiple environments and operating platforms Ensure code coverage and code quality by actively participating in code reviews Ensure adherence to best practices and standards with provision of periodic status updates Execute test cases and prepare release documents in line with the requirements Improve customer satisfaction Measures Of Outcomes 100% adherence to engineering process and standards (coding standards) with minimal or no code review comments w.r.t given guidelines. 100% adherence to project schedule / timelines and effort estimation Number of technical issues uncovered during the execution of the project Number of code defects Number of defects post delivery Number of non-compliance issues Quarterly/timely completion of mandatory domain/technical certifications Outputs Expected Product Requirements: Understand the functional/ non-functional requirements gathered from the stakeholders (Architect Product Manager Product Lead Client) for enhancement Seek review inputs from the Architect/Leads and incorporate same Product Design And Development Contribute to product development following SLA and delivery timelines Create POCs to identify the feasibility of new technologies/ products; share the findings with the architecture team for new products Seek review inputs from the product specialist Build code per the design document; ensuring compliance on defined standards/guidelines Support the client in user acceptance testing if required Ensure code quality and 100% code coverage. Product Testing Review Test Cases / Test Plan; conduct integration testing and resolve defects/bugs Product Training And Documentation Provide inputs to technical publications and review documentation of key features as required. Product Sign Offs Resolve existing issues Project Management Provide inputs on the status of the module development to the development lead Skills/Certifications Upskill regularly with timely completion of mandatory domain/technical certifications Skill Examples Ability to use Domain / Industry Knowledge to independently understand capture the business requirements and fine-tune; interacting with SME at various stages of the development Ability to use Product Design knowledge to design and implement the business and non-functional requirements Ability to use knowledge of Product Features / Functionality to understand the technical dependency of the product workflow; independently analyzing the product applying the best practices in own area of work and impart training on the various functional modules of the product Ability to design install configure troubleshoot CI/CD pipelines Ability to use Software Design & Development knowledge to develop code as per the requirement specifications and user stories. Understand and follow engineering practices take technical responsibility for all stages in the software development process and review process to ensure all practices are being followed Ability to use UX Knowledge to understand user interface design with the implications on product design and development while improving product usability across the user base. 
Provide necessary inputs to the design team indicating the user profiles/segments and savviness of these users so that the right trade-offs can be achieved Knowledge Examples Domain / Industry Knowledge: Working knowledge of standard business processes within the relevant industry vertical and customer business domain Product Design: Working knowledge of product architecture elements such as client server/SOA based configuration parameters; may specialize in one or more areas Product Features / Functionality: Working knowledge of the product Knowledge of Config/Build/Deploy processes and tools Knowledge of IaaS - cloud providers (AWS, Azure, Google, etc.) and their tool sets Knowledge of the application development lifecycle – agile and waterfall Knowledge of Quality Assurance processes Knowledge of Quality Automation processes and tools User Experience Knowledge: Basic knowledge of aspects that enhance product/systems usability and improve the overall user experience Additional Comments Role - Java/Angular Fullstack developer Exp - (2-5yrs) The team works closely with the business and is focused on delivering cutting-edge technology to the firm's internal and external clients. This involves enhancing existing systems as well as developing new tools and systems to streamline business processes and enable the business to expand into new areas. The role involves all aspects of the software development life cycle: analysis, design and implementation. We need a flexible and practical technologist who demonstrates excellent problem-solving skills, enjoys all aspects of software development and will contribute to the success of the team. Role Profile: You will work closely with the business and the wider development team to develop new tools and applications You will contribute to larger projects across a global team You will support different phases of the product lifecycle including analysis, development and testing You will be a technically proficient and enthusiastic developer, with a desire to comprehend the full stack in order to help engineer new and existing components You will work in an agile team You will learn about equity derivative products and algorithmic indices Desired Skills: Strong server-side Java skills with knowledge of Scala desirable but not essential Experience of working closely with business users Experience of agile & TDD Demonstrable ability to meet deadlines and deliver results. Knowledge of Equity Derivatives is desirable but not essential Outstanding communication and interpersonal skills. Skills: Java/Angular Full Stack, Agile, Development & Testing
Posted 15 hours ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
About the Company: You’ll be part of one of the largest and fastest-growing accounting and advisory firms, with a flexible work environment that supports work-life balance. The company is known for its award-winning culture and values diverse ideas, backgrounds, and experiences. You’re encouraged to show up as your authentic self and be part of a team that thrives on collaboration and inclusion. With active Employee Resource Groups supporting diversity, equity, and inclusion, you’ll find a workplace that truly supports and inspires you to do your best work. Responsibilities: Design and develop data ETL routines to integrate data from disparate source systems into an integrated business intelligence environment, including defining source-to-target mappings, business rules and other required data transformations. Develop dashboards and reports using Microsoft Power BI, SSRS or other BI technologies. Ensure the quality of the data through the use of process and technology. Actively participate in formulating and maintaining both a data catalog and a data dictionary. Provide data marts and data lakes to support business initiatives. Conduct business analysis to define the data required to fulfill business requirements. Meet with customers to understand business problems and questions, exploring options to model data-driven platforms used for analyzing answers. Become a trusted partner for customers who translates business problems into analysis, then helps interpret the analysis into actionable steps. Actively participate in the design, development, and maintenance of complex BI dashboards, reports, queries, and integrated applications. Manage other resources to complete the development work needed to deliver the required product to the client. Analyze and share results with users to ensure content meets business requirements; and train users to facilitate adoption and understanding of BI reports and tools. Proactively initiate projects and analysis targeting opportunities for revenue growth, risk mitigation, and operational efficiency. Provide technical support of the Microsoft platform software. Comprehensive data analysis and design experience, with full knowledge of data warehouse & data mart methodologies as well as analysis modeling. Lead solution and design sessions for end-to-end data integration efforts. Deliver technical innovations in support of business growth, progressing ideas from concepts to full-scale implementation with a focus on flexibility, reuse, and overarching extensibility. Qualifications: 5-7+ years of experience in the Microsoft business intelligence platform, including SQL Server databases, SQL Server Integration Services, Power BI/SSRS & SSAS and/or Azure Cloud Services. 5-7+ years of building and maintaining data integration processes. 5-7+ years supporting large-scale data integration and ETL processes, including daily monitoring and operational support. Strong experience working in data warehouse architectures. Experience building formatted paginated reports using MS SSRS. Experience with Power BI dashboard development or experience with one or more BI reporting tools like QlikView/QlikSense, Tableau, MicroStrategy, or open-source reporting is a plus. Experience with the Power BI Cloud Services. Significant experience in the design and development of data integration solutions and data analysis. Expert in automating data integration routines.
Expert in optimization of data integration processing and database design. Expertise in designing and developing dashboards, reports, analytics and data visualization solutions. Experience in creating dimensional data models across multiple subject areas is a plus. Strong command of MS Excel. Experience and familiarity with processes, platforms, and stakeholders. Excellent written and oral communication skills. Excellent technical documentation skills. Ability to work independently in a fast paced environment with minimum over sight. You would own building a future proof integration architecture to solve integration needs across projects. Balances a variety of competing goals in a design, including project time, scope and budget constraints, and system performance, message verbosity, and loose coupling. Preferred Skills : Some experience with advanced analytics tools and languages such as R, Python, Scala, SAS, and others. 1-2+ years’ experience working in a DevOps environment and understanding of the associated tools and capabilities, and value of using this approach. 1-2+ years of cloud-based data platforms (such as Azure, AWS, Snowflake, …). 1-2+ years using CI/CD (continuous integration/continuous deployment) methods for deployment. 1-2+ years using cloud data integration pipelines. Experience setting up and managing Power BI Report server is a plus. Experience working in data lakes and reference data environments. Pay range and compensation package : Work Schedule: Work from office (4 days working from office and 1 day work from home). Timings: 12.30 pm to 9.30 pm. Location – Mumbai, Goregaon East, Nesco. Idea candidate should be residing in max. 1 hour of commute time to office. Interview Mode : 2 In-person round. Show more Show less
Posted 15 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities:
We are seeking an experienced Data Scientist to lead the development of a Data Science program. You will work closely with various stakeholders to derive deep industry knowledge across the paper, water, leather, and performance chemical industries. You will help develop a data strategy for the company, including collection of the right data, creation of the data science project portfolio, partnering with external providers, and augmenting capabilities with additional internal hires. A large part of the job is communicating and developing relationships with key stakeholders and subject matter experts to tee up proof-of-concept (PoC) projects that demonstrate how data science can solve old problems in unique and novel ways. You will not have a large internal team to rely on, at least initially, so individual expertise, breadth of data science knowledge, and the ability to partner with external companies will be essential for success. In addition to the pure data science problems, you will work closely with a multi-disciplinary team consisting of sensor scientists, software engineers, network engineers, mechanical/electrical engineers, and chemical engineers in the development and deployment of IoT solutions.
Basic Qualifications:
Bachelor's degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science
5+ years of relevant working experience in an analytical role involving data extraction, analysis, and visualization, with expertise in the following areas:
Expertise in one or more programming languages: R, Python, MATLAB, JMP, Minitab, Java, C++, Scala
Key libraries such as scikit-learn, XGBoost, glmnet, dplyr, ggplot, R Shiny
Experience and knowledge of data mining algorithms, including supervised and unsupervised machine learning techniques in areas such as Gradient Boosting, Decision Trees, Multivariate Regression, Logistic Regression, Neural Networks, Random Forests, SVM, Naive Bayes, Time Series, and Optimization (a small Spark MLlib sketch in Scala follows this posting)
Microsoft IoT/data science toolkit: Azure Machine Learning, Data Lake, Data Lake Analytics, Workbench, IoT Hub, Stream Analytics, Cosmos DB, Time Series Insights, Power BI
Data querying languages: SQL, Hadoop/Hive
A demonstrated record of success with a verifiable portfolio of problems tackled
Preferred Qualifications:
Master's or PhD degree in a quantitative field such as Data Science, Statistics, Applied Mathematics, Physics, Engineering, or Computer Science
Experience in the specialty chemicals sector or a similar industry
Background in engineering, especially Chemical Engineering
Experience starting up a data science program
Experience working with global stakeholders
Experience working in a start-up environment, preferably in an IoT company
Knowledge of quantitative modeling tools and statistical analysis
Personality Traits:
A strong business focus, ownership, and inner self-drive to develop data science solutions for real-world customers with tangible impact.
Ability to collaborate effectively with multi-disciplinary and passionate team members.
Ability to communicate with a diverse set of stakeholders.
Strong planning and organization skills, with the ability to manage multiple complex projects.
A life-long learner who constantly updates skills.
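As a purely illustrative aside on the techniques listed above: the following is a minimal sketch of training a gradient-boosted tree classifier with Spark MLlib in Scala (Scala being one of the languages named in the posting). The dataset path, column names, and label are hypothetical and not part of the role description.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.GBTClassifier
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler

object GbtExample extends App {
  val spark = SparkSession.builder().appName("gbt-poc").master("local[*]").getOrCreate()

  // Hypothetical sensor dataset: numeric features plus a binary "failure" label
  val df = spark.read.option("header", "true").option("inferSchema", "true")
    .csv("data/sensor_readings.csv")

  val features = Array("temperature", "ph", "flow_rate", "conductivity")
  val assembler = new VectorAssembler().setInputCols(features).setOutputCol("features")

  val gbt = new GBTClassifier()
    .setLabelCol("failure")
    .setFeaturesCol("features")
    .setMaxIter(50)

  val Array(train, test) = df.randomSplit(Array(0.8, 0.2), seed = 42)
  val model = new Pipeline().setStages(Array(assembler, gbt)).fit(train)

  // Evaluate on the held-out split with area under the ROC curve
  val auc = new BinaryClassificationEvaluator()
    .setLabelCol("failure")
    .evaluate(model.transform(test))
  println(f"Test AUC = $auc%.3f")

  spark.stop()
}
```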
Posted 15 hours ago
0 years
0 Lacs
Raipur, Chhattisgarh, India
On-site
Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You'll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.
Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark (a minimal Spark sketch follows this posting).
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.
Required Skills & Qualifications
Programming & Scripting: Proficiency in Python, with strong knowledge of OOP and data structures & algorithms. Comfortable working in Linux environments for development and deployment.
Database Technologies: Strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: Solid experience with Apache Spark (PySpark/Scala). Familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: Hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: Deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc. AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: Knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake. Understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: Familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: Experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: Proficiency in setting up dashboards and alerts with Prometheus and Grafana.
Note: Immediate joiners will be preferred.
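For illustration of the kind of batch ETL described above, here is a minimal sketch of a Spark job in Scala that reads raw CSV data from an ADLS Gen2 container and writes a curated Delta table. The storage account, container names, paths, and columns are all placeholder assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersBatchEtl extends App {
  val spark = SparkSession.builder()
    .appName("orders-batch-etl")
    .getOrCreate()

  // Hypothetical raw zone in ADLS Gen2; the account, container, and paths are placeholders
  val raw = spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/")

  // Basic cleanup and a derived column, then write to a curated Delta table
  val curated = raw
    .filter(col("order_id").isNotNull)
    .withColumn("order_date", to_date(col("order_ts")))
    .withColumn("net_amount", col("gross_amount") - col("discount"))

  curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("abfss://curated@examplelake.dfs.core.windows.net/orders_delta/")

  spark.stop()
}
```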
Posted 15 hours ago
0.6 - 1.6 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa.
Job Description
Work collaboratively with Data Analysts, Data Scientists, Software Engineers, and cross-functional partners to design and deploy data pipelines that deliver analytical solutions.
Responsible for building data pipelines, data models, data marts, and data warehouses, including OLAP cubes in a multidimensional data model, with proficiency in or conceptual understanding of PySpark and SQL scripting.
Responsible for the design, development, testing, implementation, and support of functional semantic data marts using various modeling techniques from underlying data stores/data warehouses, and for facilitating Business Intelligence data solutions.
Experience in building reports, dashboards, scorecards, and visualizations using Tableau/Power BI and other data analysis techniques to collect, explore, and extract insights from structured and unstructured data.
Responsible for AI/ML models, utilizing machine learning, statistical methods, data mining, forecasting, and predictive modeling techniques.
Follow the DevOps model, Agile implementation, CI/CD deployment methods, and JIRA creation/management for projects.
Define and build technical/data documentation; experience with code version control systems (e.g., Git).
Assist the owner with periodic evaluation of next-generation capabilities and modernization of the platform.
Exhibit leadership principles such as Accountability & Ownership of High Standards (given the criticality and sensitivity of data) and Customer Focus (going above and beyond in finding innovative solutions and products to best serve the business needs, and thereby Visa).
This is a hybrid position. Expectation of days in office will be confirmed by your hiring manager.
Qualifications
Basic Qualifications
• Bachelor's degree, or 0.6–1.6 years of work experience with a Bachelor's or Master's degree in computer/information science, with relevant work experience in the IT industry
• Enthusiastic, energetic, and self-learning candidates with plenty of curiosity and flexibility
• Proven hands-on capability in the development of data pipelines and data engineering
• Experience in creating data-driven business solutions and solving data problems using technologies such as Hadoop, Hive, and Spark
• Ability to program in one or more scripting languages such as Python and one or more programming languages such as Java or Scala
• Familiarity with AI-centric libraries like TensorFlow, PyTorch, and Keras
• Familiarity with machine learning algorithms and statistical models is beneficial
• Critical ability to interpret complex data and provide actionable insights, encompassing statistical analysis, predictive modeling, and data visualization
• Extended experience in Agile release management practices, governance, and planning
• Strong leadership skills with a demonstrated ability to lead global, cross-functional teams
Additional Information
Visa is an EEO Employer.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law. Show more Show less
Posted 15 hours ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At Unify Technologies, we're hiring Scala Developers who are passionate about functional programming and ready to make a real impact.
🔹 Role: Scala Developer
🔹 Experience: 3+ Years
🔹 Location: Hyderabad
🔹 Tech Stack: Scala, Play Framework, Akka, Lagom, Slick
Very strong Scala development (coding) skills, with any combination of Java/Python/Spark/Big Data
3+ years of experience in Core Java/Scala with a good understanding of multithreading (see the concurrency sketch below)
Strong Computer Science fundamentals
Exposure to Python/Perl and Unix/K-shell scripting
Code management tools such as Git/Perforce
Experience with large batch-oriented systems
DB2/Sybase or any RDBMS
Prior experience with financial products, particularly OTC Derivatives
Exposure to counterparty risk, margining, collateral, or confirmation systems
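As an illustrative aside on the multithreading requirement: the sketch below shows Future-based concurrency in Scala, one common way concurrent work is expressed in Scala codebases. The trade and lookup names are hypothetical, chosen only to echo the OTC-derivatives context.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object TradeEnrichment extends App {
  // Hypothetical lookups that would normally hit downstream services
  def fetchCounterparty(tradeId: String): Future[String] =
    Future { Thread.sleep(100); s"CP-$tradeId" }

  def fetchCollateral(tradeId: String): Future[BigDecimal] =
    Future { Thread.sleep(150); BigDecimal(1000000) }

  // Start both lookups first so they run concurrently, then combine the results
  val enriched: Future[(String, BigDecimal)] = {
    val cpF  = fetchCounterparty("T42")
    val colF = fetchCollateral("T42")
    for {
      cp  <- cpF
      col <- colF
    } yield (cp, col)
  }

  enriched.onComplete {
    case Success((cp, col)) => println(s"counterparty=$cp, collateral=$col")
    case Failure(err)       => println(s"enrichment failed: ${err.getMessage}")
  }

  Await.ready(enriched, 5.seconds)
}
```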
Posted 16 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Description
Job Title: Senior Data Engineer
Experience: 7+ years
Location: Trivandrum, Kochi, Hyderabad
Job Description
We are looking for a highly skilled Data Engineer with over 7 years of experience in backend development and data engineering. The ideal candidate should be a self-driven individual contributor with deep expertise in cloud services, data processing frameworks, and modern backend technologies.
Mandatory Skills
Strong hands-on experience with Spark/Scala, AWS, and Python (see the sketch below)
Solid experience working with AWS cloud services
Strong understanding of backend development principles
Good-to-Have Skills
Experience with additional big data tools and ecosystems
Familiarity with CI/CD pipelines and DevOps practices
Exposure to data modeling and data warehousing concepts
Knowledge of containerization (Docker, Kubernetes)
Additional Requirements
Excellent problem-solving skills
Ability to work independently with minimal supervision
Strong communication and collaboration abilities
Skills: Spark/Scala, Python, AWS
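To illustrate the Spark/Scala-on-AWS combination named above, here is a minimal sketch of a batch job that reads raw JSON events from S3 and writes a partitioned Parquet rollup. Bucket names, paths, and the event schema are placeholder assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyEventRollup extends App {
  val spark = SparkSession.builder().appName("daily-event-rollup").getOrCreate()

  // Hypothetical raw bucket with newline-delimited JSON events
  val events = spark.read.json("s3a://example-raw-bucket/events/2024/06/")

  // Roll raw events up to one row per user per day
  val rollup = events
    .withColumn("event_date", to_date(col("event_ts")))
    .groupBy("user_id", "event_date")
    .agg(
      count("*").as("event_count"),
      sum("duration_ms").as("total_duration_ms")
    )

  rollup.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/daily_event_rollup/")

  spark.stop()
}
```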
Posted 16 hours ago
5.0 years
0 Lacs
India
Remote
Hi Everyone
Role: Senior Data Scientist – AWS (SageMaker, MLOps)
Shift: 12 pm to 9 pm IST (8 hours)
Experience: 5+ years
Position Type: Remote & Contractual
JD:
Primary Skills: MLOps, MLflow, AWS, AWS SageMaker, AWS DataZone, programming languages
Secondary Skills: Python, R, Scala, integration
Job Description:
Mandatory: SageMaker, MLflow
5+ years of work experience in Software Engineering and MLOps
Adhere to best practices for developing scalable, reliable, and secure applications
Development experience on AWS; AWS SageMaker required. AWS DataZone experience is preferred
Experience with one or more general-purpose programming languages, including but not limited to Python, R, Scala, and Spark
Experience with production-grade development, integration, and support
Candidate with a good analytical mindset who will help us with research in the MLOps area
Posted 16 hours ago
7.0 years
0 Lacs
India
Remote
About Lemongrass
Lemongrass is a software-enabled services provider, synonymous with SAP on Cloud, focused on delivering superior, highly automated Managed Services to Enterprise customers. Our customers span multiple verticals and geographies across the Americas, EMEA, and APAC. We partner with AWS, SAP, Microsoft, and other global technology leaders.
We are seeking an experienced Cloud Data Engineer with a strong background in AWS, Azure, and GCP. The ideal candidate will have extensive experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, and GCP Dataflow, as well as other ETL tools like Informatica and SAP Data Intelligence. You will be responsible for designing, implementing, and maintaining robust data pipelines and building scalable data lakes. Experience with data platforms such as Redshift, Snowflake, Databricks, Synapse, and others is essential. Familiarity with data extraction from SAP or ERP systems is a plus.
Key Responsibilities:
Design and Development:
Design, develop, and maintain scalable ETL pipelines using cloud-native tools (AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.).
Architect and implement data lakes and data warehouses on cloud platforms (AWS, Azure, GCP).
Develop and optimize data ingestion, transformation, and loading processes using Databricks, Snowflake, Redshift, BigQuery, and Azure Synapse.
Implement ETL processes using tools like Informatica, SAP Data Intelligence, and others.
Develop and optimize data processing jobs using Spark and Scala (see the sketch after this posting).
Data Integration and Management:
Integrate various data sources, including relational databases, APIs, unstructured data, and ERP systems, into the data lake.
Ensure data quality and integrity through rigorous testing and validation.
Perform data extraction from SAP or ERP systems when necessary.
Performance Optimization:
Monitor and optimize the performance of data pipelines and ETL processes.
Implement best practices for data management, including data governance, security, and compliance.
Collaboration and Communication:
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
Collaborate with cross-functional teams to design and implement data solutions that meet business needs.
Documentation and Maintenance:
Document technical solutions, processes, and workflows.
Maintain and troubleshoot existing ETL pipelines and data integrations.
Qualifications
Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Advanced degrees are a plus.
Experience:
7+ years of experience as a Data Engineer or in a similar role.
Proven experience with cloud platforms: AWS, Azure, and GCP.
Hands-on experience with cloud-native ETL tools such as AWS DMS, AWS Glue, Kafka, Azure Data Factory, GCP Dataflow, etc.
Experience with other ETL tools like Informatica, SAP Data Intelligence, etc.
Experience in building and managing data lakes and data warehouses.
Proficiency with data platforms like Redshift, Snowflake, BigQuery, Databricks, and Azure Synapse.
Experience with data extraction from SAP or ERP systems is a plus.
Strong experience with Spark and Scala for data processing.
Skills:
Strong programming skills in Python, Java, or Scala.
Proficient in SQL and query optimization techniques.
Familiarity with data modeling, ETL/ELT processes, and data warehousing concepts.
Knowledge of data governance, security, and compliance best practices.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Preferred Qualifications:
Experience with other data tools and technologies such as Apache Spark or Hadoop.
Certifications in cloud platforms (AWS Certified Data Analytics – Specialty, Google Professional Data Engineer, Microsoft Certified: Azure Data Engineer Associate).
Experience with CI/CD pipelines and DevOps practices for data engineering.
The selected applicant will be subject to a background investigation, which will be conducted and the results of which will be used in compliance with applicable law.
What we offer in return:
Remote Working: Lemongrass always has been and always will offer 100% remote work.
Flexibility: Work where and when you like most of the time.
Training: A subscription to A Cloud Guru and a generous budget for taking certifications and other resources you'll find helpful.
State-of-the-art tech: An opportunity to learn and run the latest industry-standard tools.
Team: Colleagues who will challenge you, giving you the chance to learn from them and them from you.
Lemongrass Consulting is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate on the basis of race, religion, color, national origin, religious creed, gender, sexual orientation, gender identity, gender expression, age, genetic information, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
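As an illustration of the relational-source-to-data-lake integration described above, here is a minimal Spark/Scala sketch that extracts a table over JDBC and lands it as Parquet in a raw zone. The connection URL, table, credentials, and bucket path are placeholder assumptions, not a real environment.

```scala
import org.apache.spark.sql.SparkSession

object JdbcToLake extends App {
  val spark = SparkSession.builder().appName("jdbc-to-lake").getOrCreate()

  // Hypothetical ERP-adjacent Postgres source; credentials are read from the environment
  val invoices = spark.read
    .format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/erp")
    .option("dbtable", "finance.invoices")
    .option("user", sys.env.getOrElse("DB_USER", "etl_user"))
    .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
    .load()

  // Land the extracted table in the raw zone of the data lake as Parquet
  invoices.write
    .mode("append")
    .parquet("s3a://example-data-lake/raw/finance/invoices/")

  spark.stop()
}
```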
Posted 16 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
🔹 Key Responsibilities:
Design, develop, and maintain Scala-based microservices
Build scalable and reactive systems using Akka or the Lagom framework (a minimal Akka HTTP sketch follows this posting)
Implement real-time data pipelines with Apache Pulsar
Develop and optimize data access using the Slick connector and PostgreSQL
Build advanced search capabilities using Elasticsearch
Work on containerized applications and deploy them using Kubernetes
Set up and manage CI/CD pipelines using GitLab
Collaborate with cross-functional teams to ensure on-time, high-quality deliveries
Follow best practices in testing, performance tuning, and security
🔹 Required Skills:
Strong hands-on experience with Scala
Proficient in Akka or the Lagom framework
Expertise in microservices architecture and containerization
Knowledge of Apache Pulsar for streaming
Experience with PostgreSQL and the Slick connector for DB integration
Proficient with Elasticsearch
Familiarity with GitLab, CI/CD pipelines, and Kubernetes (K8s)
Excellent problem-solving and debugging skills
Strong communication and collaboration abilities
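For illustration of the Scala microservice style mentioned above, here is a minimal Akka HTTP sketch exposing a single health endpoint. The service name, route, and port are hypothetical; this is only a sketch of the framework, not the team's actual service design.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

import scala.concurrent.ExecutionContext
import scala.io.StdIn

object HealthService extends App {
  // A tiny HTTP microservice exposing a single GET /health endpoint
  implicit val system: ActorSystem[Nothing] = ActorSystem(Behaviors.empty, "health-service")
  implicit val ec: ExecutionContext = system.executionContext

  val route =
    path("health") {
      get {
        complete("""{"status":"ok"}""")
      }
    }

  val binding = Http().newServerAt("0.0.0.0", 8080).bind(route)
  println("Service online at http://localhost:8080/health - press ENTER to stop")
  StdIn.readLine()
  binding.flatMap(_.unbind()).onComplete(_ => system.terminate())
}
```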
Posted 16 hours ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Are you a passionate Spark and Scala developer looking for an exciting opportunity to work on cutting-edge big data projects? Look no further! Delhivery is seeking a talented and motivated Spark & Scala Expert to join our dynamic team.
Responsibilities:
Develop and optimize Spark applications to process large-scale data efficiently
Collaborate with cross-functional teams to design and implement data-driven solutions
Troubleshoot and resolve performance issues in Spark jobs
Stay up to date with the latest trends and advancements in Spark and Scala technologies
Requirements:
Proficiency in Redshift, data pipelines, Kafka, real-time streaming, connectors, etc.
3+ years of professional experience with big data systems, pipelines, and data processing
Strong experience with Apache Spark, Spark Streaming, and Spark SQL (see the streaming sketch below)
Solid understanding of distributed systems, databases, system design, and big data processing frameworks
Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus
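To illustrate the Spark Streaming and Kafka combination listed above, here is a minimal Spark Structured Streaming sketch in Scala that consumes a Kafka topic and maintains per-minute event counts. The broker address, topic name, and windowing choice are placeholder assumptions, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.streaming.Trigger

object ShipmentEventStream extends App {
  val spark = SparkSession.builder().appName("shipment-event-stream").getOrCreate()
  import spark.implicits._

  // Hypothetical Kafka source; the broker and topic are placeholders
  val events = spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "shipment-events")
    .load()
    .selectExpr("CAST(value AS STRING) AS json", "timestamp")

  // Count events per 1-minute window as a simple real-time metric
  val counts = events
    .withWatermark("timestamp", "5 minutes")
    .groupBy(window($"timestamp", "1 minute"))
    .count()

  val query = counts.writeStream
    .outputMode("update")
    .format("console")
    .trigger(Trigger.ProcessingTime("30 seconds"))
    .start()

  query.awaitTermination()
}
```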
Posted 16 hours ago